Sample records for purpose image analysis

  1. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  2. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  3. Self-Organizing Neural Network Map for the Purpose of Visualizing the Concept Images of Students on Angles

    ERIC Educational Resources Information Center

    Kaya, Deniz

    2017-01-01

    The purpose of the study is to perform a thorough lower-dimensional visualization process for determining the concept images of students on angles. Ward clustering analysis combined with a Self-Organizing Neural Network Map (SOM) has been used for the dimension-reduction process. The Conceptual Understanding Tool, which consisted…

  4. An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis

    NASA Astrophysics Data System (ADS)

    Kim, Yongmin; Alexander, Thomas

    1986-06-01

    In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.

  5. Analysis of the Image of Scientists Portrayed in the Lebanese National Science Textbooks

    ERIC Educational Resources Information Center

    Yacoubian, Hagop A.; Al-Khatib, Layan; Mardirossian, Taline

    2017-01-01

    This article presents an analysis of how scientists are portrayed in the Lebanese national science textbooks. The purpose of this study was twofold. First, to develop a comprehensive analytical framework that can serve as a tool to analyze the image of scientists portrayed in educational resources. Second, to analyze the image of scientists…

  6. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is a well-validated comprehensive software package for cardiovascular image analysis. It is freely available for research purposes provided that relevant original research publications related to the software are cited.
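
    The test-script validation idea described in this abstract can be sketched generically: re-run the analysis on curated test cases and compare the outputs against stored reference values. The function names, tolerance, and demo data below are hypothetical placeholders, not Segment's actual test suite.

```python
import numpy as np

def analyze_case(image_stack):
    """Hypothetical analysis step, e.g. a volume estimate from a segmented stack."""
    return float(np.count_nonzero(image_stack > 0.5) * 0.001)

def run_validation(cases, reference, rtol=0.01):
    """Re-run the analysis on stored test cases and compare against reference values."""
    failures = []
    for case_id, stack in cases.items():
        value = analyze_case(stack)
        if not np.isclose(value, reference[case_id], rtol=rtol):
            failures.append((case_id, value, reference[case_id]))
    return failures  # an empty list means the build still reproduces the references

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stack = rng.random((16, 64, 64))
    cases = {"case_001": stack}
    reference = {"case_001": analyze_case(stack)}  # captured once from a validated build
    print(run_validation(cases, reference))        # [] -> pass
```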

  7. Polymeric spatial resolution test patterns for mass spectrometry imaging using nano-thermal analysis with atomic force microscopy

    DOE PAGES

    Tai, Tamin; Kertesz, Vilmos; Lin, Ming-Wei; ...

    2017-05-11

    As the spatial resolution of mass spectrometry imaging technologies has begun to reach into the nanometer regime, finding readily available or easily made resolution reference materials has become particularly challenging for molecular imaging purposes. This study describes the fabrication, characterization and use of vertical line array polymeric spatial resolution test patterns for nano-thermal analysis/atomic force microscopy/mass spectrometry chemical imaging.

  8. Polymeric spatial resolution test patterns for mass spectrometry imaging using nano-thermal analysis with atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai, Tamin; Kertesz, Vilmos; Lin, Ming-Wei

    As the spatial resolution of mass spectrometry imaging technologies has begun to reach into the nanometer regime, finding readily available or easily made resolution reference materials has become particularly challenging for molecular imaging purposes. This study describes the fabrication, characterization and use of vertical line array polymeric spatial resolution test patterns for nano-thermal analysis/atomic force microscopy/mass spectrometry chemical imaging.

  9. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  10. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  11. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  12. Change in Tongue Morphology in Response to Expiratory Resistance Loading Investigated by Magnetic Resonance Imaging

    PubMed Central

    Yanagisawa, Yukio; Matsuo, Yoshimi; Shuntoh, Hisato; Mitamura, Masaaki; Horiuchi, Noriaki

    2013-01-01

    [Purpose] The purpose of this study was to investigate the effect of expiratory resistance load on the tongue area encompassing the suprahyoid and genioglossus muscles. [Subjects] The subjects were 30 healthy individuals (15 males, 15 females, mean age: 28.9 years). [Methods] Magnetic resonance imaging was used to investigate morphological changes in response to resistive expiratory pressure loading in the area encompassing the suprahyoid and genioglossus muscles. Images were taken when water pressure was sustained at 0%, 10%, 30%, and 50% of maximum resistive expiratory pressure. We then measured tongue area using image analysis software, and the morphological changes were analyzed using repeated measures analysis of variance followed by post hoc comparisons. [Results] A significant change in the tongue area was detected in both sexes upon loading. Multiple comparison analysis revealed further significant differences in tongue area as well as changes in tongue area in response to the different expiratory pressures. [Conclusion] The findings demonstrate that higher expiratory pressure facilitates greater reduction in tongue area. PMID:24259824

  13. Remote sensor digital image data analysis using the General Electric Image 100 analysis system (a study of analysis speed, cost, and performance)

    NASA Technical Reports Server (NTRS)

    Mcmurtry, G. J.; Petersen, G. W. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It was found that the high-speed man-machine interaction capability is a distinct advantage of the Image 100; however, the small size of the digital computer in the system is a definite limitation. The system can be highly useful in an analysis mode in which it complements a large general-purpose computer. The Image 100 was found to be extremely valuable in the analysis of aircraft MSS data, where the spatial resolution begins to approach photographic quality and the analyst can exercise interpretation judgements and readily interact with the machine.

  14. Notes for Brazil sampling frame evaluation trip

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Hicks, D. R. (Compiler)

    1981-01-01

    Field notes describing a trip conducted in Brazil are presented. This trip was conducted for the purpose of evaluating a sample frame developed using LANDSAT full frame images by the USDA Economic and Statistics Service for the eventual purpose of cropland production estimation with LANDSAT by the Foreign Commodity Production Forecasting Project of the AgRISTARS program. Six areas were analyzed on the basis of land use, crop land in corn and soybean, field size and soil type. The analysis indicated generally successful use of LANDSAT images for purposes of remote large area land use stratification.

  15. Image Processing Software

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT. IMAGELAB FFT is a general purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos run during robberies.

  16. Magnetic resonance imaging as a tool for extravehicular activity analysis

    NASA Technical Reports Server (NTRS)

    Dickenson, R.; Lorenz, C.; Peterson, S.; Strauss, A.; Main, J.

    1992-01-01

    The purpose of this research is to examine the value of magnetic resonance imaging (MRI) as a means of conducting kinematic studies of the hand for the purpose of EVA capability enhancement. After imaging the subject hand using a magnetic resonance scanner, the resulting 2D slices were reconstructed into a 3D model of the proximal phalanx of the left hand. Using the coordinates of several landmark positions, one is then able to decompose the motion of the rigid body. MRI offers highly accurate measurements due to its tomographic nature without the problems associated with other imaging modalities for in vivo studies.

  17. Brain imaging registry for neurologic diagnosis and research

    NASA Astrophysics Data System (ADS)

    Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.

    2002-05-01

    The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems including Picture Archiving Communication Systems (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data originating from multiple operational systems over time and spread out across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.

  18. Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. D.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth.

  19. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucus membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least square regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discrimination rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692

  20. Fundamentals of quantitative dynamic contrast-enhanced MR imaging.

    PubMed

    Paldino, Michael J; Barboriak, Daniel P

    2009-05-01

    Quantitative analysis of dynamic contrast-enhanced MR imaging (DCE-MR imaging) has the power to provide information regarding physiologic characteristics of the microvasculature and is, therefore, of great potential value to the practice of oncology. In particular, these techniques could have a significant impact on the development of novel anticancer therapies as a promising biomarker of drug activity. Standardization of DCE-MR imaging acquisition and analysis to provide more reproducible measures of tumor vessel physiology is of crucial importance to realize this potential. The purpose of this article is to review the pathophysiologic basis and technical aspects of DCE-MR imaging techniques.

  1. The Image of People with Intellectual Disability in Taiwan Newspapers

    ERIC Educational Resources Information Center

    Chen, Chih-Hsuan; Hsu, Kan-Lin; Shu, Bih-Ching; Fetzer, Susan

    2012-01-01

    Background: There is limited research on the development of newspaper analysis about the images of people with ID in Chinese newspapers. The purpose of this study was: (a) to understand the general image of persons with ID presented in printed newspapers in Taiwan, and (b) to classify the various images of persons with ID and to measure the…

  2. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  3. Single-Cell Analysis Using Hyperspectral Imaging Modalities.

    PubMed

    Mehta, Nishir; Shaik, Shahensha; Devireddy, Ram; Gartia, Manas Ranjan

    2018-02-01

    Almost a decade ago, hyperspectral imaging (HSI) was employed by NASA in satellite imaging applications such as remote sensing technology. This technology has since been extensively used in the exploration of minerals, agricultural purposes, water resources, and urban development needs. Due to recent advancements in optical reconstruction and imaging, HSI can now be applied down to micro- and nanometer scales, possibly allowing for exquisite control and analysis of systems ranging from single cells to complex biological systems. This short review provides a description of the working principle of the HSI technology and how HSI can be used to assist, substitute, and validate traditional imaging technologies. This is followed by a description of the use of HSI for biological analysis and medical diagnostics, with emphasis on single-cell analysis using HSI.

  4. An evaluation of the directed flow graph methodology

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Rajala, S. A.

    1984-01-01

    The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.

  5. Quantitative analysis of phosphoinositide 3-kinase (PI3K) signaling using live-cell total internal reflection fluorescence (TIRF) microscopy.

    PubMed

    Johnson, Heath E; Haugh, Jason M

    2013-12-02

    This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.

  6. Magnetic Resonance Imaging Assessment of the Velopharyngeal Mechanism at Rest and during Speech in Chinese Adults and Children

    ERIC Educational Resources Information Center

    Tian, Wei; Yin, Heng; Redett, Richard J.; Shi, Bing; Shi, Jin; Zhang, Rui; Zheng, Qian

    2010-01-01

    Purpose: Recent applications of the magnetic resonance imaging (MRI) technique introduced accurate 3-dimensional measurements of the velopharyngeal mechanism. Further standardization of the data acquisition and analysis protocol was successfully applied to imaging adults at rest and during phonation. This study was designed to test and modify a…

  7. Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.

    PubMed

    Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong

    2016-06-01

    Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.
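
    A rough illustration of combining two single-distortion scores into one quality number follows. The variance-of-Laplacian blur proxy, the block-wise illumination proxy, and the fixed weighting are all stand-ins chosen for this sketch; the paper's actual metrics and its fuzzy-neural-network fusion trained on application-driven labels are not reproduced here.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def blur_score(gray):
    """Sharpness proxy: variance of the Laplacian (low values suggest blur)."""
    return float(laplace(gray.astype(float)).var())

def illumination_score(gray, block=64):
    """Uneven-illumination proxy: spread of local mean brightness across the image."""
    local_mean = uniform_filter(gray.astype(float), size=block)
    return float(local_mean.std() / (local_mean.mean() + 1e-9))

def combined_quality(gray, w_blur=0.5, w_illum=0.5):
    """Toy fusion of the two single-distortion scores into one quality value.
    A fixed weighting is used here as a placeholder for the trained fusion model."""
    blur_bad = 1.0 / (1.0 + blur_score(gray))        # map to [0, 1] "badness"
    illum_bad = min(1.0, illumination_score(gray))
    return 1.0 - (w_blur * blur_bad + w_illum * illum_bad)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((256, 256))                     # stand-in for a dermoscopy image
    print(round(combined_quality(img), 3))
```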

  8. LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites

    NASA Technical Reports Server (NTRS)

    Wukelic, G. E. (Principal Investigator)

    1983-01-01

    No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis, and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling purposes. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.

  9. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
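
    The general idea of a wavelet multiresolution analysis of a phantom image can be sketched with PyWavelets, as below. The detail-band energy profile is a generic illustration only; the ACR-phantom scoring logic of the software above is not reproduced, and the wavelet, level count, and test data are assumptions of this sketch.

```python
import numpy as np
import pywt  # PyWavelets

def detail_energy_profile(phantom_image, wavelet="db2", levels=3):
    """Energy of the detail sub-bands at each decomposition level (coarse -> fine)."""
    coeffs = pywt.wavedec2(phantom_image.astype(float), wavelet, level=levels)
    profile = []
    for cH, cV, cD in coeffs[1:]:          # skip the approximation band
        profile.append(float(np.sum(cH**2 + cV**2 + cD**2)))
    return profile

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.normal(size=(512, 512))      # stand-in for a phantom exposure
    print(detail_energy_profile(img))
```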

  10. CALIPSO: an interactive image analysis software package for desktop PACS workstations

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Huang, H. K.

    1990-07-01

    The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include geometric and densitometric volumes and ejection fraction calculation from radionuclide and cine-angiograms, Fourier analysis of cardiac wall motion, vascular stenosis measurement, color-coded parametric display of regional flow distribution from dynamic coronary angiograms, and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. 1. Rationale and objectives of the project: Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images. During the past decade…

  11. Medical Imaging System

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The MD Image System, a true-color image processing system that serves as a diagnostic aid and tool for storage and distribution of images, was developed by Medical Image Management Systems, Huntsville, AL, as a "spinoff from a spinoff." The original spinoff, Geostar 8800, developed by Crystal Image Technologies, Huntsville, incorporates advanced UNIX versions of ELAS (developed by NASA's Earth Resources Laboratory for analysis of Landsat images) for general purpose image processing. The MD Image System is an application of this technology to a medical system that aids in the diagnosis of cancer, and can accept, store and analyze images from other sources such as Magnetic Resonance Imaging.

  12. Objects Grouping for Segmentation of Roads Network in High Resolution Images of Urban Areas

    NASA Astrophysics Data System (ADS)

    Maboudi, M.; Amini, J.; Hahn, M.

    2016-06-01

    Updated road databases are required for many purposes such as urban planning, disaster management, car navigation, route planning, traffic management and emergency handling. In the last decade, the improvement in spatial resolution of VHR civilian satellite sensors - as the main source of large-scale mapping applications - was so considerable that the GSD has become finer than the size of common urban objects of interest such as buildings, trees and road parts. This technological advancement pushed the development of "Object-based Image Analysis (OBIA)" as an alternative to pixel-based image analysis methods. Segmentation, as one of the main stages of OBIA, provides the image objects on which most of the following processes will be applied. Therefore, the success of an OBIA approach is strongly affected by the segmentation quality. In this paper, we propose a purpose-dependent refinement strategy in order to group road segments in urban areas using maximal similarity based region merging. For investigations with the proposed method, we use high resolution images of some urban sites. The promising results suggest that the proposed approach is applicable to grouping of road segments in urban areas.

  13. Software phantom with realistic speckle modeling for validation of image analysis methods in echocardiography

    NASA Astrophysics Data System (ADS)

    Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten

    2014-03-01

    Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
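
    A common generic speckle model is multiplicative gamma noise with unit mean; the sketch below adds such noise to a noise-free phantom image. This is only a stand-in for illustration, not the tissue-specific speckle models proposed in the paper above, and the "looks" parameter and test phantom are assumptions of this sketch.

```python
import numpy as np

def add_speckle(echo_free_image, looks=4, seed=0):
    """Add multiplicative speckle: gamma-distributed multiplier with unit mean."""
    rng = np.random.default_rng(seed)
    multiplier = rng.gamma(shape=looks, scale=1.0 / looks, size=echo_free_image.shape)
    return echo_free_image * multiplier

if __name__ == "__main__":
    phantom = np.ones((128, 128))             # uniform "tissue" region
    speckled = add_speckle(phantom, looks=1)  # fewer looks -> heavier speckle
    print(speckled.mean(), speckled.var())    # mean ~1, variance ~1/looks
```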

  14. Computer-assisted image processing to detect spores from the fungus Pandora neoaphidis.

    PubMed

    Korsnes, Reinert; Westrum, Karin; Fløistad, Erling; Klingen, Ingeborg

    2016-01-01

    This contribution demonstrates an example of experimental automatic image analysis to detect spores prepared on microscope slides derived from trapping. The application is to monitor aerial spore counts of the entomopathogenic fungus Pandora neoaphidis which may serve as a biological control agent for aphids. Automatic detection of such spores can therefore play a role in plant protection. The present approach for such detection is a modification of traditional manual microscopy of prepared slides, where autonomous image recording precedes computerised image analysis. The purpose of the present image analysis is to support human visual inspection of imagery data - not to replace it. The workflow has three components:
    • Preparation of slides for microscopy.
    • Image recording.
    • Computerised image processing where the initial part is, as usual, segmentation depending on the actual data product. Then comes identification of blobs, calculation of principal axes of blobs, symmetry operations and projection on a three-parameter egg shape space.
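
    The blob-identification and principal-axes steps named in the workflow can be sketched with SciPy, as below: label connected components in a segmented mask and derive each blob's principal axes from the eigenvalues of its pixel-coordinate covariance. The egg-shape projection used for spore recognition is not reproduced, and the test mask is an assumption of this sketch.

```python
import numpy as np
from scipy import ndimage

def blob_axes(binary_mask):
    """Return (label, major axis, minor axis) for each blob in a segmented image."""
    labels, n = ndimage.label(binary_mask)
    results = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        coords = np.stack([ys, xs]).astype(float)
        if coords.shape[1] < 3:
            continue                                   # ignore tiny specks
        eigvals = np.sort(np.linalg.eigvalsh(np.cov(coords)))[::-1]
        # 4*sqrt(eigenvalue) approximates the full axis length of an ellipse.
        results.append((i, 4 * np.sqrt(eigvals[0]), 4 * np.sqrt(eigvals[1])))
    return results

if __name__ == "__main__":
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 10:40] = True                          # one elongated "spore"
    print(blob_axes(mask))
```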

  15. "Drinking Deeply with Delight": An Investigation of Transformative Images in Isaiah 1 and 65-66

    ERIC Educational Resources Information Center

    Radford, Peter

    2016-01-01

    This project examines the images used in the beginning and ending chapters of Isaiah. The purpose of this project is to trace the transformation of specific images from their introduction in Isaiah 1 to their re-interpretation in Isaiah 65-66. While this analysis uses the verbal parallels (shared vocabulary) as a starting point, the present…

  16. Automatic morphological classification of galaxy images

    PubMed Central

    Shamir, Lior

    2009-01-01

    We describe an image analysis supervised learning algorithm that can automatically classify galaxy images. The algorithm is first trained using manually classified images of elliptical, spiral, and edge-on galaxies. A large set of image features is extracted from each image, and the most informative features are selected using Fisher scores. Test images can then be classified using a simple Weighted Nearest Neighbor rule such that the Fisher scores are used as the feature weights. Experimental results show that galaxy images from Galaxy Zoo can be classified automatically into spiral, elliptical and edge-on galaxies with accuracy of ~90% compared to classifications carried out by the author. Full compilable source code of the algorithm is available for free download, and its general-purpose nature makes it suitable for other uses that involve automatic image analysis of celestial objects. PMID:20161594
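
    The Fisher-score-weighted nearest-neighbor rule described above can be sketched in a few lines. The toy feature vectors stand in for the paper's large set of extracted image features, which are not reproduced here.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score: between-class variance over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def weighted_nn_predict(X_train, y_train, x, weights):
    """1-nearest-neighbour classification with the Fisher scores as feature weights."""
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))
    return y_train[np.argmin(d)]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Toy stand-in for feature vectors of spiral (0) vs elliptical (1) galaxies.
    X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
    y = np.array([0] * 50 + [1] * 50)
    w = fisher_scores(X, y)
    print(weighted_nn_predict(X, y, X[0] + 0.1, w))   # expected: 0
```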

  17. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  18. Singular spectrum decomposition of Bouligand-Minkowski fractal descriptors: an application to the classification of texture Images

    NASA Astrophysics Data System (ADS)

    Florindo, João Batista

    2018-04-01

    This work proposes the use of Singular Spectrum Analysis (SSA) for the classification of texture images, more specifically, to enhance the performance of the Bouligand-Minkowski fractal descriptors in this task. Fractal descriptors are known to be a powerful approach to model and, particularly, identify complex patterns in natural images. Nevertheless, the multiscale analysis involved in those descriptors makes them highly correlated. Although other attempts to address this point were proposed in the literature, none of them investigated the relation between the fractal correlation and the well-established analyses employed for time series, and SSA is one of the most powerful techniques for this purpose. The proposed method was employed for the classification of benchmark texture images and the results were compared with other state-of-the-art classifiers, confirming the potential of this analysis in image classification.

  19. Adaptive image inversion of contrast 3D echocardiography for enabling automated analysis.

    PubMed

    Shaheen, Anjuman; Rajpoot, Kashif

    2015-08-01

    Contrast 3D echocardiography (C3DE) is commonly used to enhance the visual quality of ultrasound images in comparison with non-contrast 3D echocardiography (3DE). Although the image quality in C3DE is perceived to be improved for visual analysis, it actually deteriorates for the purpose of automatic or semi-automatic analysis due to higher speckle noise and intensity inhomogeneity. Therefore, LV endocardial feature extraction and segmentation from C3DE images remains a challenging problem. To address this challenge, this work proposes an adaptive pre-processing method to invert the appearance of the C3DE image. The image inversion is based on an image intensity threshold value which is automatically estimated through image histogram analysis. In the inverted appearance, the LV cavity appears dark while the myocardium appears bright, thus making it similar in appearance to a 3DE image. Moreover, the resulting inverted image has a high-contrast, low-noise appearance, yielding a strong LV endocardium boundary and facilitating feature extraction for segmentation. Our results demonstrate that the inverted appearance of the contrast image enables the subsequent LV segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
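
    A histogram-derived threshold followed by intensity inversion might look roughly like the sketch below. Otsu's criterion stands in for the paper's unspecified histogram analysis, and the way the threshold is used (suppressing sub-threshold background before flipping intensities) is an assumption of this sketch, not the published algorithm.

```python
import numpy as np

def adaptive_invert(volume):
    """Estimate a threshold from the histogram (Otsu as a stand-in), then invert."""
    data = volume.astype(float)
    hist, edges = np.histogram(data, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cum_counts = np.cumsum(hist)
    cum_intensity = np.cumsum(hist * centers)
    w0, w1 = cum_counts, cum_counts[-1] - cum_counts
    m0 = cum_intensity / np.maximum(w0, 1)
    m1 = (cum_intensity[-1] - cum_intensity) / np.maximum(w1, 1)
    threshold = centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]
    # Assumed use of the threshold: suppress sub-threshold background, then flip
    # intensities so the contrast-filled cavity appears dark (as described above).
    clipped = np.where(data < threshold, threshold, data)
    inverted = clipped.max() - clipped
    return inverted, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    vol = rng.random((32, 64, 64))            # stand-in for a C3DE volume
    inv, t = adaptive_invert(vol)
    print(round(t, 3), inv.min() >= 0)
```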

  20. Analysis of the Tail Structures of Comet 1P/Halley 1910 II

    NASA Astrophysics Data System (ADS)

    Voelzke, Marcos Rincon

    2013-11-01

    For the purpose of identifying, measuring, and correlating the morphological structures along the plasma tail of 1P/Halley, 886 images from September 1909 to May 1911 are analysed. These images are from the Atlas of Comet Halley 1910 II (DONN; RAHE; BRANDT, 1986).

  1. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Hagedorn, Christina; Proctor, Michael; Goldstein, Louis; Wilson, Stephen M.; Miller, Bruce; Gorno-Tempini, Maria Luisa; Narayanan, Shrikanth S.

    2017-01-01

    Purpose: Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided…

  2. Comparison of two freely available software packages for mass spectrometry imaging data analysis using brains from morphine addicted rats.

    PubMed

    Bodzon-Kulakowska, Anna; Marszalek-Grabska, Marta; Antolak, Anna; Drabik, Anna; Kotlinska, Jolanta H; Suder, Piotr

    Data analysis from mass spectrometry imaging (MSI) experiments is a very complex task. Most of the software packages devoted to this purpose are designed by the mass spectrometer manufacturers and, thus, are not freely available. Laboratories developing their own MS-imaging sources usually do not have access to the commercial software, and they must rely on freely available programs. The most recognized ones are BioMap, developed by Novartis under Interactive Data Language (IDL), and Datacube, developed by the Dutch Foundation for Fundamental Research of Matter (FOM-Amolf). These two systems were used here for the analysis of images obtained from rat brain tissues subjected to morphine influence, and their capabilities were compared in terms of ease of use and the quality of the obtained results.

  3. Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.

    PubMed

    Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong

    2008-04-01

    The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.

  4. SU-E-E-16: The Application of Texture Analysis for Differentiation of Central Cancer From Atelectasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, M; Fan, T; Duan, J

    2015-06-15

    Purpose: Prospectively assess the potential utility of texture analysis for differentiation of central cancer from atelectasis. Methods: 0 consecutive central lung cancer patients who were referred for CT imaging and PET-CT were enrolled. A radiotherapy physician delineated the tumor and atelectasis on the fused imaging based on the CT image and PET-CT image. Texture parameters (such as energy, correlation, sum average, difference average, and difference entropy) were obtained based on the gray level co-occurrence matrix (GLCM) to quantitatively discriminate tumor from atelectasis. Results: The texture analysis results showed that the parameters of correlation and sum average had obvious statistical significance (P<0.05). Conclusion: The results of this study indicate that texture analysis may be useful for the differentiation of central lung cancer and atelectasis.
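
    A minimal NumPy sketch of GLCM texture features of the kind listed above (energy, correlation, sum average) is shown below. The grey-level quantization, offset, and random test region are assumptions of this sketch; the delineation and statistical-testing steps of the study are not reproduced.

```python
import numpy as np

def glcm(gray, levels=16, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for the pixel offset (dy, dx)."""
    q = np.clip((gray.astype(float) / (gray.max() + 1e-9) * levels).astype(int), 0, levels - 1)
    a = q[: q.shape[0] - dy, : q.shape[1] - dx].ravel()
    b = q[dy:, dx:].ravel()
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    return P / P.sum()

def texture_features(P):
    """Energy, correlation and sum average computed from a GLCM."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    energy = (P ** 2).sum()
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (sd_i * sd_j + 1e-12)
    p_sum = np.bincount((i + j).ravel(), weights=P.ravel())   # distribution of i+j
    sum_average = (np.arange(p_sum.size) * p_sum).sum()
    return {"energy": energy, "correlation": correlation, "sum_average": sum_average}

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    roi = rng.random((64, 64))                # stand-in for a delineated CT region
    print(texture_features(glcm(roi)))
```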

  5. ConfocalGN: A minimalistic confocal image generator

    NASA Astrophysics Data System (ADS)

    Dmitrieff, Serge; Nédélec, François

    Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is however essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. This software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to that of the sample image. With a minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
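
    The core idea, blurring a known ground truth and adding noise whose statistics are estimated from a real sample stack, can be sketched as below. This is a simplified stand-in for the ConfocalGN pipeline: the PSF width, the background-estimation rule, and the Gaussian noise model are assumptions of this sketch, and the tool's fluorophore-list input and actual parameterisation are not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_stack(ground_truth, sample_stack, psf_sigma=(1.0, 2.0, 2.0), seed=0):
    """Blur a ground-truth 3D bitmap and add noise matched to a sample stack."""
    rng = np.random.default_rng(seed)
    # Crude background estimate: the darker half of the real stack.
    background = sample_stack[sample_stack < np.percentile(sample_stack, 50)]
    noise_mean, noise_std = background.mean(), background.std()
    blurred = gaussian_filter(ground_truth.astype(float), sigma=psf_sigma)
    signal_scale = sample_stack.max() - noise_mean
    image = noise_mean + signal_scale * blurred / (blurred.max() + 1e-9)
    # Gaussian noise is a simplification of real (Poisson-dominated) confocal noise.
    return image + rng.normal(0.0, noise_std, size=image.shape)

if __name__ == "__main__":
    truth = np.zeros((16, 64, 64)); truth[8, 20:44, 30:34] = 1.0   # a "filament"
    rng = np.random.default_rng(6)
    sample = 100 + 10 * rng.normal(size=(16, 64, 64))              # real-stack stand-in
    print(synthetic_stack(truth, sample).shape)
```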

  6. Survey: interpolation methods for whole slide image processing.

    PubMed

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
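
    The scale-then-rescale comparison described above can be sketched with SciPy spline interpolation of different orders. PSNR and wall-clock time are generic choices for this illustration; the survey's nine specific methods and its quantification-based metrics are not reproduced.

```python
import time
import numpy as np
from scipy.ndimage import zoom

def roundtrip_psnr(image, factor=0.25, order=1):
    """Downscale then upscale with spline interpolation (0 = nearest, 1 = bilinear,
    3 = cubic) and report PSNR against the original plus the time taken."""
    start = time.perf_counter()
    small = zoom(image, factor, order=order)
    restored = zoom(small, np.array(image.shape) / np.array(small.shape), order=order)
    elapsed = time.perf_counter() - start
    h = min(image.shape[0], restored.shape[0])
    w = min(image.shape[1], restored.shape[1])
    mse = np.mean((image[:h, :w] - restored[:h, :w]) ** 2)
    psnr = 10 * np.log10((image.max() ** 2) / (mse + 1e-12))
    return psnr, elapsed

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    img = rng.random((256, 256))              # stand-in for a whole-slide tile
    for order in (0, 1, 3):
        psnr, t = roundtrip_psnr(img, order=order)
        print(f"order={order}: PSNR={psnr:.1f} dB, time={t*1000:.1f} ms")
```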

  7. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  8. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform developed in Python programming environment that can be used for the processing and analysis of dermatology images. The platform provides the capability for reading a file that contains a dermatology image. The platform supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool to a dermatologist. Furthermore, it could be used to classify images including from other anatomical parts such as breast or lung, after proper re-training of the classification algorithms.
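
    The two ROI-selection steps named above (smoothing, then thresholding) can be sketched as below. The mean-intensity threshold and the "keep the largest connected region" rule are assumptions of this sketch, not the platform's exact logic.

```python
import numpy as np
from scipy import ndimage

def automatic_roi(gray, sigma=2.0):
    """Automatic ROI selection: Gaussian smoothing, global threshold, largest blob."""
    smoothed = ndimage.gaussian_filter(gray.astype(float), sigma=sigma)
    mask = smoothed > smoothed.mean()           # simple global threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))   # keep the largest region as ROI

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    img = rng.random((128, 128))
    img[40:90, 30:80] += 1.0                    # a brighter "lesion"
    print(automatic_roi(img).sum(), "pixels selected")
```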

  9. Entrepreneurial Skills and Socio-Cultural Factors: An Empirical Analysis in Secondary Education Students

    ERIC Educational Resources Information Center

    Rosique-Blasco, Mario; Madrid-Guijarro, Antonia; García-Pérez-de-Lema, Domingo

    2016-01-01

    Purpose: The purpose of this paper is to explore how entrepreneurial skills (such as creativity, proactivity and risk tolerance) and socio-cultural factors (such as role model and businessman image) affect secondary education students' propensity towards entrepreneurial options in their future careers. Design/methodology/approach: A sample of…

  10. Change in Teacher Candidates' Metaphorical Images about Classroom Management in a Social Constructivist Learning Environment

    ERIC Educational Resources Information Center

    Akar, Hanife; Yildirim, Ali

    2009-01-01

    The purpose of this study was to understand the conceptual change teacher candidates went through in a constructivist learning environment in a classroom management course. Within a qualitative case study design, teacher candidates' metaphorical images about classroom management were obtained through document analysis before and after they were…

  11. Complications of Whipple surgery: imaging analysis.

    PubMed

    Bhosale, Priya; Fleming, Jason; Balachandran, Aparna; Charnsangavej, Chuslip; Tamm, Eric P

    2013-04-01

    The purpose of this article is to describe and illustrate anatomic findings after the Whipple procedure, and the appearance of its complications, on imaging. Knowledge of the cross-sectional anatomy following the Whipple procedure, and clinical findings for associated complications, are essential to rapidly and accurately diagnose such complications on postoperative studies in order to optimize treatment.

  12. Categorical and Specificity Differences between User-Supplied Tags and Search Query Terms for Images. An Analysis of "Flickr" Tags and Web Image Search Queries

    ERIC Educational Resources Information Center

    Chung, EunKyung; Yoon, JungWon

    2009-01-01

    Introduction: The purpose of this study is to compare characteristics and features of user supplied tags and search query terms for images on the "Flickr" Website in terms of categories of pictorial meanings and level of term specificity. Method: This study focuses on comparisons between tags and search queries using Shatford's categorization…

  13. Low-cost digital image processing at the University of Oklahoma

    NASA Technical Reports Server (NTRS)

    Harrington, J. A., Jr.

    1981-01-01

    Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house generated preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general-purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and image analysis using either of the two approaches for low-cost LANDSAT data processing are described.

  14. The interactive astronomical data analysis facility - image enhancement techniques to Comet Halley

    NASA Astrophysics Data System (ADS)

    Klinglesmith, D. A.

    1981-10-01

    A PDP 11/40 computer is at the heart of a general-purpose interactive data analysis facility designed to permit easy access to data in both visual imagery and graphic representations. The major components consist of: the 11/40 CPU and 256 K bytes of 16-bit memory; two TU10 tape drives; 20 million bytes of disk storage; three user terminals; and the COMTAL image processing display system. The application of image enhancement techniques to two sequences of photographs of Comet Halley taken in Egypt in 1910 provides evidence for eruptions from the comet's nucleus.

  15. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  16. Comprehensive, powerful, efficient, intuitive: a new software framework for clinical imaging applications

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Holmes, David R., III; Hanson, Dennis P.; Robb, Richard A.

    2006-03-01

    One of the greatest challenges for a software engineer is to create a complex application that is comprehensive enough to be useful to a diverse set of users, yet focused enough for individual tasks to be carried out efficiently with minimal training. This "powerful yet simple" paradox is particularly prevalent in advanced medical imaging applications. Recent research in the Biomedical Imaging Resource (BIR) at Mayo Clinic has been directed toward development of an imaging application framework that provides powerful image visualization/analysis tools in an intuitive, easy-to-use interface. It is based on two concepts very familiar to physicians - Cases and Workflows. Each case is associated with a unique patient and a specific set of routine clinical tasks, or a workflow. Each workflow is comprised of an ordered set of general-purpose modules which can be re-used for each unique workflow. Clinicians help describe and design the workflows, and then are provided with an intuitive interface to both patient data and analysis tools. Since most of the individual steps are common to many different workflows, the use of general-purpose modules reduces development time and results in applications that are consistent, stable, and robust. While the development of individual modules may reflect years of research by imaging scientists, new customized workflows based on the new modules can be developed extremely fast. If a powerful, comprehensive application is difficult to learn and complicated to use, it will be unacceptable to most clinicians. Clinical image analysis tools must be intuitive and effective or they simply will not be used.
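
    The Case/Workflow idea described above, a workflow as an ordered list of reusable modules applied to one patient case, might be sketched as below. The class names, module functions, and state dictionary are illustrative only, not the BIR framework's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Module = Callable[[Dict], Dict]   # each module reads and updates the case state

@dataclass
class Case:
    patient_id: str
    state: Dict = field(default_factory=dict)

@dataclass
class Workflow:
    name: str
    modules: List[Module]

    def run(self, case: Case) -> Case:
        for module in self.modules:           # modules execute in a fixed order
            case.state = module(case.state)
        return case

def load_images(state: Dict) -> Dict:
    state["images"] = ["ct_series_1"]         # placeholder for a DICOM loader
    return state

def segment(state: Dict) -> Dict:
    state["segmentation"] = f"mask_of_{state['images'][0]}"
    return state

if __name__ == "__main__":
    wf = Workflow("liver_planning", [load_images, segment])
    print(wf.run(Case("patient_42")).state)
```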

  17. Accuracy Evaluation of a 3-Dimensional Surface Imaging System for Guidance in Deep-Inspiration Breath-Hold Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja

    2013-02-01

    Purpose: To investigate the applicability of 3-dimensional (3D) surface imaging for image guidance in deep-inspiration breath-hold radiation therapy (DIBH-RT) for patients with left-sided breast cancer. For this purpose, setup data based on captured 3D surfaces was compared with setup data based on cone beam computed tomography (CBCT). Methods and Materials: Twenty patients treated with DIBH-RT after breast-conserving surgery (BCS) were included. Before the start of treatment, each patient underwent a breath-hold CT scan for planning purposes. During treatment, dose delivery was preceded by setup verification using CBCT of the left breast. 3D surfaces were captured by a surface imaging system concurrently with the CBCT scan. Retrospectively, surface registrations were performed for CBCT to CT and for a captured 3D surface to CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic error, random error, and 95% limits of agreement were calculated. Furthermore, receiver operating characteristic (ROC) analysis was performed. Results: Good correlation between setup errors was found: R² = 0.70, 0.90, and 0.82 in the left-right, craniocaudal, and anterior-posterior directions, respectively. Systematic errors were ≤0.17 cm in all directions. Random errors were ≤0.15 cm. The limits of agreement were -0.34 to 0.48, -0.42 to 0.39, and -0.52 to 0.23 cm in the left-right, craniocaudal, and anterior-posterior directions, respectively. ROC analysis showed that a threshold between 0.4 and 0.8 cm corresponds to promising true positive rates (0.78-0.95) and false positive rates (0.12-0.28). Conclusions: The results support the application of 3D surface imaging for image guidance in DIBH-RT after BCS.
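
    For readers who want to reproduce the agreement statistics named above, the sketch below computes group mean, systematic error, random error, and 95% limits of agreement from a synthetic matrix of per-fraction setup-error differences, using one common convention (systematic error = SD of patient means, random error = RMS of patient SDs). The data and array shape are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic per-fraction differences (cm) between surface-based and CBCT-based
    # setup errors: one row per patient, one column per fraction.
    differences = rng.normal(loc=0.05, scale=0.15, size=(20, 15))

    patient_means = differences.mean(axis=1)
    group_mean = patient_means.mean()                # group mean M
    systematic_error = patient_means.std(ddof=1)     # SD of patient means (Sigma)
    random_error = np.sqrt((differences.std(axis=1, ddof=1) ** 2).mean())  # RMS of patient SDs (sigma)

    # 95% limits of agreement over all paired differences (Bland-Altman style).
    d = differences.ravel()
    loa = (d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1))

    print(f"M={group_mean:.2f} cm, Sigma={systematic_error:.2f} cm, "
          f"sigma={random_error:.2f} cm, LoA=({loa[0]:.2f}, {loa[1]:.2f}) cm")
    ```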

  18. A Single-Institution Experience in Percutaneous Image-Guided Biopsy of Malignant Pleural Mesothelioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, B. T., E-mail: Welch.brian@mayo.edu; Eiken, P. W.; Atwell, T. D.

    Purpose: Mesothelioma has been considered a difficult pathologic diagnosis to achieve via image-guided core needle biopsy. The purpose of this study was to assess the diagnostic sensitivity of percutaneous image-guided biopsy for diagnosis of pleural mesothelioma. Materials and Methods: Retrospective review was performed to identify patients with a confirmed diagnosis of pleural mesothelioma who underwent image-guided needle biopsy between January 1, 2002, and January 1, 2016. Thirty-two patients with pleural mesothelioma were identified and included for analysis in 33 image-guided biopsy procedures. Patient, procedural, and pathologic characteristics were recorded. Complications were characterized via standardized nomenclature [Common Terminology Criteria for Adverse Events (CTCAE)]. Results: Percutaneous image-guided biopsy was associated with an overall sensitivity of 81%. No CTCAE clinically significant complications were observed. No image-guided procedures were complicated by pneumothorax or necessitated chest tube placement. No patients had tumor seeding of the biopsy tract. Conclusion: Percutaneous image-guided biopsy can achieve high sensitivity for pathologic diagnosis of pleural mesothelioma with a low procedural complication rate, potentially obviating the need for surgical biopsy.

  19. Diagnostic Performance of Mammographic Texture Analysis in the Differential Diagnosis of Benign and Malignant Breast Tumors.

    PubMed

    Li, Zhiming; Yu, Lan; Wang, Xin; Yu, Haiyang; Gao, Yuanxiang; Ren, Yande; Wang, Gang; Zhou, Xiaoming

    2017-11-09

    The purpose of this study was to investigate the diagnostic performance of mammographic texture analysis in the differential diagnosis of benign and malignant breast tumors. Digital mammography images were obtained from the Picture Archiving and Communication System at our institute. Texture features of mammographic images were calculated. Mann-Whitney U test was used to identify differences between the benign and malignant group. The receiver operating characteristic (ROC) curve analysis was used to assess the diagnostic performance of texture features. Significant differences of texture features of histogram, gray-level co-occurrence matrix (GLCM) and run length matrix (RLM) were found between the benign and malignant breast group (P < .05). The area under the ROC (AUROC) of histogram, GLCM, and RLM were 0.800, 0.787, and 0.761, with no differences between them (P > .05). The AUROCs of imaging-based diagnosis, texture analysis, and imaging-based diagnosis combined with texture analysis were 0.873, 0.863, and 0.961, respectively. When imaging-based diagnosis was combined with texture analysis, the AUROC was higher than that of imaging-based diagnosis or texture analysis (P < .05). Mammographic texture analysis is a reliable technique for differential diagnosis of benign and malignant breast tumors. Furthermore, the combination of imaging-based diagnosis and texture analysis can significantly improve diagnostic performance. Copyright © 2017 Elsevier Inc. All rights reserved.
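
    A minimal sketch of the kind of texture-plus-ROC analysis described above is given below, using scikit-image GLCM features and scikit-learn's AUROC on synthetic patches; it is not the authors' pipeline, and the patch size, features, and labels are assumptions.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    def texture_features(patch: np.ndarray) -> np.ndarray:
        """GLCM contrast and homogeneity for one 8-bit mammographic ROI."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, "contrast").ravel(),
                          graycoprops(glcm, "homogeneity").ravel()])

    # Synthetic stand-ins for benign (0) and malignant (1) ROIs.
    patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
    labels = np.array([0] * 20 + [1] * 20)

    features = np.array([texture_features(p) for p in patches])
    # Score each ROI with a single texture feature and report the AUROC.
    print("AUROC:", roc_auc_score(labels, features[:, 0]))
    ```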

  20. Principle component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup, which includes excitation and detection branches, has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the multi-spectral fluorescence imaging data. The observed results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
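
    The PCA-plus-LDA step can be sketched with scikit-learn as below; the per-pixel band intensities and class means are synthetic stand-ins for the registered multi-spectral autofluorescence data, so this only illustrates the analysis pattern, not the study's actual processing.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(2)

    # Per-pixel intensities at the four detection bands (400, 450, 500, 550 nm);
    # class 0 = healthy skin, class 1 = BCC. Values are invented for illustration.
    healthy = rng.normal(loc=[1.0, 0.8, 0.6, 0.5], scale=0.1, size=(500, 4))
    bcc = rng.normal(loc=[0.8, 0.7, 0.7, 0.6], scale=0.1, size=(500, 4))
    X = np.vstack([healthy, bcc])
    y = np.array([0] * 500 + [1] * 500)

    # PCA for decorrelation / dimensionality reduction, then LDA for classification.
    model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))
    ```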

  1. Analysis of 3-D Tongue Motion from Tagged and Cine Magnetic Resonance Images

    ERIC Educational Resources Information Center

    Xing, Fangxu; Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.

    2016-01-01

    Purpose: Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during…

  2. Spatiotemporal Analysis of High-Speed Videolaryngoscopic Imaging of Organic Pathologies in Males

    ERIC Educational Resources Information Center

    Bohr, Christopher; Kräck, Angelika; Dubrovskiy, Denis; Eysholdt, Ulrich; Svec, Jan; Psychogios, Georgios; Ziethe, Anke; Döllinger, Michael

    2014-01-01

    Purpose: The aim of this study was to identify parameters that would differentiate healthy from pathological organic-based vocal fold vibrations to emphasize clinical usefulness of high-speed imaging. Method: Fifty-five men (M age = 36 years, SD = 20 years) were examined and separated into 4 groups: 1 healthy (26 individuals) and 3 pathological…

  3. The Effects of Image and Animation in Enhancing Pedagogical Agent Persona

    ERIC Educational Resources Information Center

    Baylor, Amy L.; Ryu, Jeeheon

    2003-01-01

    The purpose of this experimental study was to test the role of image and animation on: a) learners' perceptions of pedagogical agent persona characteristics (i.e., extent to which agent was person-like, engaging, credible, and instructor-like); b) agent value; and c) performance. The primary analysis consisted of two contrast comparisons: 1)…

  4. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks, and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  5. Sensor image prediction techniques

    NASA Astrophysics Data System (ADS)

    Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.

    1981-02-01

    The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of his mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; University of Missouri, Columbia, MO; Chen, H

    Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and CT acquisition and reconstruction parameters, but a thoughtful analysis of these effects is absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer-simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low-frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; the calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
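
    As a rough illustration of a local NPS calculation of the kind discussed above, the sketch below averages 2-D periodograms of mean-subtracted ROIs; the ROI stack is synthetic white noise, the pixel spacing is assumed, and mean subtraction is only one of the background-removal methods compared in the study.

    ```python
    import numpy as np

    def local_nps(rois: np.ndarray, pixel_spacing: float) -> np.ndarray:
        """2-D noise power spectrum averaged over a stack of square ROIs."""
        n = rois.shape[-1]
        spectra = []
        for roi in rois:
            detrended = roi - roi.mean()          # simple background removal
            dft = np.fft.fftshift(np.fft.fft2(detrended))
            spectra.append(np.abs(dft) ** 2 * pixel_spacing ** 2 / (n * n))
        return np.mean(spectra, axis=0)

    rng = np.random.default_rng(3)
    rois = rng.normal(0.0, 10.0, size=(64, 128, 128))   # synthetic uniform-phantom ROIs
    nps = local_nps(rois, pixel_spacing=0.5)            # 0.5 mm pixels, assumed

    # Parseval check: integrating the NPS should recover the noise variance (~100 HU^2).
    variance = nps.sum() / (0.5 * 128) ** 2
    print(nps.shape, f"recovered variance ~ {variance:.1f}")
    ```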

  7. Iris recognition and what is next? Iris diagnosis: a new challenging topic for machine vision from image acquisition to image interpretation

    NASA Astrophysics Data System (ADS)

    Perner, Petra

    2017-03-01

    Molecular image-based techniques are widely used in medicine to detect specific diseases. Diagnosis based on a patient's appearance ("look diagnosis") is an important issue, and the analysis of the eye also plays an important role in detecting specific diseases. Standardizing such assessments with an automatic system is a new and challenging field for machine vision. Compared with iris recognition, iris diagnosis places much higher demands on image acquisition and interpretation. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses and in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for the diagnosis of illnesses and for individual health protection. In this paper, we describe our work towards an automatic iris diagnosis system. We describe image acquisition and its associated problems, and explain different options for image acquisition and preprocessing. We describe the image analysis method for detecting the iris and give the meta-model for image interpretation. Based on this model we show the many image analysis tasks involved, ranging from image-object feature analysis and spatial image analysis to color image analysis. Our first results for recognition of the iris are given. We describe how to detect the pupil and unwanted lamp reflections, and explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.

  8. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
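
    A compact sketch of such a RANSAC-style congruence analysis is given below: random point groups are used to estimate a 3D similarity transformation (Umeyama-style least squares), and the consensus set of points that follow the transformation is taken as the stable, undeformed area. The point data, threshold, and group size are invented; this is not the authors' implementation.

    ```python
    import numpy as np

    def similarity_transform(src: np.ndarray, dst: np.ndarray):
        """Least-squares 3D similarity transform (scale, rotation, translation)."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)
        U, D, Vt = np.linalg.svd(cov)
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0
        R = U @ S @ Vt
        scale = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
        t = mu_d - scale * R @ mu_s
        return scale, R, t

    def congruent_inliers(p0, p1, trials=500, threshold=0.5, group_size=4, seed=0):
        """RANSAC-style congruence analysis between two epochs of a point cloud."""
        rng = np.random.default_rng(seed)
        best = np.zeros(len(p0), dtype=bool)
        for _ in range(trials):
            idx = rng.choice(len(p0), size=group_size, replace=False)
            s, R, t = similarity_transform(p0[idx], p1[idx])
            residuals = np.linalg.norm(p1 - (s * (R @ p0.T).T + t), axis=1)
            inliers = residuals < threshold
            if inliers.sum() > best.sum():
                best = inliers
        return best

    # Toy example: epoch 1 equals epoch 0 shifted rigidly, with 20% of points deformed.
    rng = np.random.default_rng(1)
    p0 = rng.uniform(0, 100, size=(200, 3))
    p1 = p0 + np.array([5.0, -2.0, 1.0])
    p1[:40] += rng.normal(0, 3.0, size=(40, 3))   # locally deformed (non-congruent) region
    stable = congruent_inliers(p0, p1)
    print("stable points:", stable.sum(), "of", len(p0))
    ```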

  9. [Central online quality assurance in radiology: an IT solution exemplified by the German Breast Cancer Screening Program].

    PubMed

    Czwoydzinski, J; Girnus, R; Sommer, A; Heindel, W; Lenzen, H

    2011-09-01

    Physical-technical quality assurance is one of the essential tasks of the National Reference Centers in the German Breast Cancer Screening Program. For this purpose the mammography units are required to transfer the measured values of the constancy tests on a daily basis and all phantom images created for this purpose on a weekly basis to the reference centers. This is a serious logistical challenge. To meet these requirements, we developed an innovative software tool. By the end of 2005, we had already developed web-based software (MammoControl) allowing the transmission of constancy test results via entry forms. For automatic analysis and transmission of the phantom images, we then introduced an extension (MammoControl DIANA). This was based on Java, Java Web Start, the NetBeans Rich Client Platform, the Pixelmed Java DICOM Toolkit and the ImageJ library. MammoControl DIANA was designed to run locally in the mammography units. This allows automated on-site image analysis. Both results and compressed images can then be transmitted to the reference center. We developed analysis modules for the daily and monthly consistency tests and additionally for a homogeneity test. The software we developed facilitates the immediate availability of measurement results, phantom images, and DICOM header data in all reference centers. This allows both targeted guidance and short response time in the case of errors. We achieved a consistent IT-based evaluation with standardized tools for the entire screening program in Germany. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Resolution analysis of archive films for the purpose of their optimal digitization and distribution

    NASA Astrophysics Data System (ADS)

    Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2017-09-01

    With recent high demand for ultra-high-definition (UHD) content to be screened in high-end digital movie theaters but also in the home environment, film archives full of movies in high definition and above are in the scope of UHD content providers. Movies captured with traditional film technology represent a virtually unlimited source of UHD content. The goal of maintaining complete image information is also related to the choice of scanning resolution and of the spatial resolution used for further distribution. It might seem that scanning the film material at the highest possible resolution using state-of-the-art film scanners, and also distributing it at this resolution, is the right choice. The information content of the digitized images is, however, limited, and various degradations moreover lead to its further reduction. Digital distribution of the content at the highest image resolution might therefore be unnecessary or uneconomical. In other cases, the highest possible resolution is inevitable if we want to preserve fine scene details or film grain structure for archiving purposes. This paper deals with the image detail content analysis of archive film records. The resolution limit of the captured scene image and factors which lower the final resolution are discussed. Methods are proposed to determine the spatial detail of the film picture based on the analysis of its digitized image data. These procedures allow recommendations to be determined for optimal distribution of digitized video content intended for various display devices with lower resolutions. The results obtained are illustrated on a spatial downsampling use case scenario, and a performance evaluation of the proposed techniques is presented.

  11. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  12. Neural classifier in the estimation process of maturity of selected varieties of apples

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Piekarska-Boniecka, H.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Zbytek, Z.; Ludwiczak, A.; Przybylak, A.; Lewicki, A.

    2015-07-01

    This paper presents methods of neural image analysis for estimating the maturity of selected apple varieties that are popular in Poland. The degree of maturity was identified on the basis of information encoded in graphical form in digital photographs. The process applies the BBCH scale, which is used to determine the maturity of apples; this scale is widely used in the EU, has been developed for many species of monocotyledonous and dicotyledonous plants, and enables detailed determination of the development stage of a given plant. The purpose of this work is to identify the maturity level of selected apple varieties with the support of image analysis methods and classification techniques in the form of artificial neural networks. The analysis of representative graphical features extracted by image analysis enabled the assessment of apple maturity. For practical use, the "JabVis 1.1" neural IT system was created in accordance with software engineering requirements, to support decision-making in the apple production and processing chain.

  13. Progress in analysis of computed tomography (CT) images of hardwood logs for defect detection

    Treesearch

    Erol Sarigul; A. Lynn Abbott; Daniel L. Schmoldt

    2003-01-01

    This paper addresses the problem of automatically detecting internal defects in logs using computed tomography (CT) images. The overall purpose is to assist in breakdown optimization. Several studies have shown that the commercial value of resulting boards can be increased substantially if defect locations are known in advance, and if this information is used to make...

  14. Reel Principals: A Descriptive Content Analysis of the Images of School Principals Depicted in Movies from 1997-2009

    ERIC Educational Resources Information Center

    Wolfrom, Katy J.

    2010-01-01

    According to Glanz's early research, school principals have been depicted as autocrats, bureaucrats, buffoons, and/or villains in movies from 1950 to 1996. The purpose of this study was to determine if these stereotypical characterizations of school principals have continued in films from 1997-2009, or if more favorable images have emerged that…

  15. Classification of Korla fragrant pears using NIR hyperspectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Yang, Chun-Chieh; Ying, Yibin; Kim, Moon S.; Chao, Kuanglin

    2012-05-01

    Korla fragrant pears are small oval pears characterized by light green skin, crisp texture, and a pleasant perfume for which they are named. Anatomically, the calyx of a fragrant pear may be either persistent or deciduous; the deciduous-calyx fruits are considered more desirable due to taste and texture attributes. Chinese packaging standards require that packed cases of fragrant pears contain 5% or less of the persistent-calyx type. Near-infrared hyperspectral imaging was investigated as a potential means for automated sorting of pears according to calyx type. Hyperspectral images spanning the 992-1681 nm region were acquired using an EMCCD-based laboratory line-scan imaging system. Analysis of the hyperspectral images was performed to select wavebands useful for identifying persistent-calyx fruits and for identifying deciduous-calyx fruits. Based on the selected wavebands, an image-processing algorithm was developed that targets automated classification of Korla fragrant pears into the two categories for packaging purposes.

  16. Automated image analysis of placental villi and syncytial knots in histological sections.

    PubMed

    Kidron, Debora; Vainer, Ifat; Fisher, Yael; Sharony, Reuven

    2017-05-01

    Delayed villous maturation and accelerated villous maturation diagnosed in histologic sections are morphologic manifestations of pathophysiological conditions. The inter-observer agreement among pathologists in assessing these conditions is moderate at best. We investigated whether automated image analysis of placental villi and syncytial knots could improve standardization in diagnosing these conditions. Placentas of antepartum fetal death at or near term were diagnosed as normal, delayed or accelerated villous maturation. Histologic sections of 5 cases per group were photographed at ×10 magnification. Automated image analysis of villi and syncytial knots was performed, using ImageJ public domain software. Analysis of hundreds of histologic images was carried out within minutes on a personal computer, using macro commands. Compared to normal placentas, villi from delayed maturation were larger and fewer, with fewer and smaller syncytial knots. Villi from accelerated maturation were smaller. The data were further analyzed according to horizontal placental zones and groups of villous size. Normal placentas can be discriminated from placentas of delayed or accelerated villous maturation using automated image analysis. Automated image analysis of villi and syncytial knots is not equivalent to interpretation by the human eye. Each method has advantages and disadvantages in assessing the 2-dimensional histologic sections representing the complex, 3-dimensional villous tree. Image analysis of placentas provides quantitative data that might help in standardizing and grading of placentas for diagnostic and research purposes. Copyright © 2017 Elsevier Ltd. All rights reserved.
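
    The original study used ImageJ macro commands; purely as an illustration of the same kind of measurement, the Python/scikit-image sketch below thresholds a synthetic x10 field, labels candidate villi, and reports their count and mean area. The image, threshold choice, and size filter are assumptions, not the authors' pipeline.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    rng = np.random.default_rng(4)

    # Synthetic stand-in for one x10 field: smooth bright blobs on a dark background.
    field = filters.gaussian(rng.random((512, 512)), sigma=8)

    # Threshold, remove small specks, then label and measure candidate villi.
    mask = field > filters.threshold_otsu(field)
    mask = morphology.remove_small_objects(mask, min_size=200)
    labels = measure.label(mask)
    props = measure.regionprops(labels)

    areas = [p.area for p in props]
    print(f"objects per field: {len(props)}, mean area: {np.mean(areas):.0f} px")
    ```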

  17. Bio-image warehouse system: concept and implementation of a diagnosis-based data warehouse for advanced imaging modalities in neuroradiology.

    PubMed

    Minati, L; Ghielmetti, F; Ciobanu, V; D'Incerti, L; Maccagnano, C; Bizzi, A; Bruzzone, M G

    2007-03-01

    Advanced neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), chemical shift spectroscopy imaging (CSI), diffusion tensor imaging (DTI), and perfusion-weighted imaging (PWI), create novel challenges in terms of data storage and management: huge amounts of raw data are generated, the results of analysis may depend on the software and settings that have been used, and most often intermediate files are inherently not compliant with the current DICOM (digital imaging and communication in medicine) standard, as they contain multidimensional complex and tensor arrays and various other types of data structures. A software architecture, referred to as the Bio-Image Warehouse System (BIWS), which can be used alongside a radiology information system/picture archiving and communication system (RIS/PACS) to store neuroimaging data for research purposes, is presented. The system architecture is conceived with the purpose of enabling queries by diagnosis according to a predefined two-layered classification taxonomy. The operational impact of the system and the time needed to get acquainted with the web-based interface and with the taxonomy are found to be limited. The development of modules enabling automated creation of statistical templates is proposed.

  18. Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.

    PubMed

    Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A

    2003-07-01

    Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purpose. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images provided from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used for other research fields.

  19. Using Cell-ID 1.4 with R for Microscope-Based Cytometry

    PubMed Central

    Bush, Alan; Chernomoretz, Ariel; Yu, Richard; Gordon, Andrew

    2012-01-01

    This unit describes a method for quantifying various cellular features (e.g., volume, total and subcellular fluorescence localization) from sets of microscope images of individual cells. It includes procedures for tracking cells over time. One purposefully defocused transmission image (sometimes referred to as bright-field or BF) is acquired to segment the image and locate each cell. Fluorescent images (one for each of the color channels to be analyzed) are then acquired by conventional wide-field epifluorescence or confocal microscopy. This method uses the image processing capabilities of Cell-ID (Gordon et al., 2007, as updated here) and data analysis by the statistical programming framework R (R-Development-Team, 2008), which we have supplemented with a package of routines for analyzing Cell-ID output. Both Cell-ID and the analysis package are open-source. PMID:23026908

  20. Analyzing microtomography data with Python and the scikit-image library.

    PubMed

    Gouillart, Emmanuelle; Nunez-Iglesias, Juan; van der Walt, Stéfan

    2017-01-01

    The exploration and processing of images is a vital aspect of the scientific workflows of many X-ray imaging modalities. Users require tools that combine interactivity, versatility, and performance. scikit-image is an open-source image processing toolkit for the Python language that supports a large variety of file formats and is compatible with 2D and 3D images. The toolkit exposes a simple programming interface, with thematic modules grouping functions according to their purpose, such as image restoration, segmentation, and measurements. scikit-image users benefit from a rich scientific Python ecosystem that contains many powerful libraries for tasks such as visualization or machine learning. scikit-image combines a gentle learning curve, versatile image processing capabilities, and the scalable performance required for the high-throughput analysis of X-ray imaging data.
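
    The scikit-image programming interface mentioned above can be illustrated with a short restoration-segmentation-measurement chain on a synthetic 3D volume; the volume, parameter values, and expected count are invented for the example.

    ```python
    import numpy as np
    from skimage import filters, measure, restoration

    rng = np.random.default_rng(5)

    # Synthetic 3D "tomographic" volume: a dense sphere plus Gaussian noise.
    zz, yy, xx = np.mgrid[:64, :64, :64]
    volume = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2).astype(float)
    volume += rng.normal(0, 0.3, volume.shape)

    # Restoration -> segmentation -> measurement, mirroring the thematic modules.
    denoised = restoration.denoise_tv_chambolle(volume, weight=0.1)
    mask = denoised > filters.threshold_otsu(denoised)
    labels = measure.label(mask)
    largest = max(measure.regionprops(labels), key=lambda r: r.area)
    print("segmented voxels:", largest.area, "expected ~", int(4 / 3 * np.pi * 20 ** 3))
    ```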

  1. Clinical image quality evaluation for panoramic radiography in Korean dental clinics

    PubMed Central

    Choi, Bo-Ram; Choi, Da-Hye; Huh, Kyung-Hoe; Yi, Won-Jin; Heo, Min-Suk; Choi, Soon-Chul; Bae, Kwang-Hak

    2012-01-01

    Purpose The purpose of this study was to investigate the level of clinical image quality of panoramic radiographs and to analyze the parameters that influence the overall image quality. Materials and Methods Korean dental clinics were asked to provide three randomly selected panoramic radiographs. An oral and maxillofacial radiology specialist evaluated those images using our self-developed Clinical Image Quality Evaluation Chart. Three evaluators classified the overall image quality of the panoramic radiographs and evaluated the causes of imaging errors. Results A total of 297 panoramic radiographs were collected from 99 dental hospitals and clinics. The mean of the scores according to the Clinical Image Quality Evaluation Chart was 79.9. In the classification of the overall image quality, 17 images were deemed 'optimal for obtaining diagnostic information,' 153 were 'adequate for diagnosis,' 109 were 'poor but diagnosable,' and nine were 'unrecognizable and too poor for diagnosis'. The results of the analysis of the causes of the errors in all the images are as follows: 139 errors in the positioning, 135 in the processing, 50 from the radiographic unit, and 13 due to anatomic abnormality. Conclusion Panoramic radiographs taken at local dental clinics generally have a normal or higher-level image quality. Principal factors affecting image quality were positioning of the patient and image density, sharpness, and contrast. Therefore, when images are taken, the patient position should be adjusted with great care. Also, standardizing objective criteria of image density, sharpness, and contrast is required to evaluate image quality effectively. PMID:23071969

  2. Neural image analysis for estimating aerobic and anaerobic decomposition of organic matter based on the example of straw decomposition

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Nowakowski, K.; Slosarz, P.; Dach, J.; Pilarski, K.

    2012-04-01

    The purpose of the project was to identify the degree of organic matter decomposition by means of a neural model based on graphical information derived from image analysis. Empirical data (photographs of compost content at various stages of maturation) were used to generate an optimal neural classifier (Boniecki et al. 2009, Nowakowski et al. 2009). The best classification properties were found in an RBF (Radial Basis Function) artificial neural network, which demonstrates that the process is non-linear.

  3. Content-addressable read/write memories for image analysis

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be cases of the search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, reducing the time required to a constant per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general purpose processing will be feasible.

  4. Information extraction from multivariate images

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kegley, K. A.; Schiess, J. R.

    1986-01-01

    An overview of several multivariate image processing techniques is presented, with emphasis on techniques based upon the principal component transformation (PCT). A multiimage associates a multivariate pixel value with each pixel location; these values are scaled and quantized into gray-level vectors, and the bivariate correlation measures the extent to which two component images are correlated. The PCT of a multiimage decorrelates the components to reduce dimensionality and, where some off-diagonal covariance elements are not small, reveals intercomponent dependencies; for display purposes the principal component images must be postprocessed back into multiimage format. The principal component analysis of a multiimage is a statistical analysis based upon the PCT whose primary application is to determine the intrinsic component dimensionality of the multiimage. Computational considerations are also discussed.

  5. [Myocardial perfusion scintigraphy - short form of the German guideline].

    PubMed

    Lindner, O; Burchert, W; Hacker, M; Schaefer, W; Schmidt, M; Schober, O; Schwaiger, M; vom Dahl, J; Zimmermann, R; Schäfers, M

    2013-01-01

    This guideline is a short summary of the guideline for myocardial perfusion scintigraphy published by the Association of the Scientific Medical Societies in Germany (AWMF). The purpose of this guideline is to provide practical assistance for indication and examination procedures as well as image analysis, and to present the state of the art of myocardial perfusion scintigraphy. After a short introduction on the fundamentals of imaging, precise and detailed information is given on the indications, patient preparation, stress testing, radiopharmaceuticals, examination protocols and techniques, radiation exposure, data reconstruction, as well as information on visual and quantitative image analysis and interpretation. In addition, possible pitfalls, artefacts, and key elements of reporting are described.

  6. A concept for holistic whole body MRI data analysis, Imiomics

    PubMed Central

    Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel

    2017-01-01

    Purpose To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics) and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods The presented image registration method utilizes relative elasticity constraints of different tissue obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, and thereby illustrate the usefulness of the proposed Imiomics concept. Conclusions The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015
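
    The two evaluation measures named above can be written down compactly; the sketch below computes a Dice coefficient between binary masks and an inverse consistency error for a forward/backward transform pair, here simplified to 4x4 affine matrices even though the actual registration is deformable. All data are toy values.

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice coefficient between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def inverse_consistency_error(forward: np.ndarray, backward: np.ndarray,
                                  points: np.ndarray) -> float:
        """Mean residual after mapping points forward and then backward (affine case)."""
        homog = np.hstack([points, np.ones((len(points), 1))])
        round_trip = (backward @ (forward @ homog.T)).T[:, :3]
        return float(np.linalg.norm(round_trip - points, axis=1).mean())

    # Toy masks offset by two voxels, and a translation with its exact inverse.
    mask_a = np.zeros((50, 50, 50), bool); mask_a[10:40, 10:40, 10:40] = True
    mask_b = np.zeros((50, 50, 50), bool); mask_b[12:42, 10:40, 10:40] = True
    T = np.eye(4); T[:3, 3] = [2.0, 0.0, 0.0]
    pts = np.random.default_rng(6).uniform(0, 50, (100, 3))
    print(f"Dice = {dice(mask_a, mask_b):.3f}, "
          f"ICE = {inverse_consistency_error(T, np.linalg.inv(T), pts):.2e}")
    ```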

  7. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  8. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  9. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance, and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  10. Optical Coherence Tomography in the UK Biobank Study – Rapid Automated Analysis of Retinal Thickness for Large Population-Based Studies

    PubMed Central

    Grossi, Carlota M.; Foster, Paul J.; Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Peto, Tunde; Thomas, Dhanes; Patel, Praveen J.

    2016-01-01

    Purpose To describe an approach to the use of optical coherence tomography (OCT) imaging in large, population-based studies, including methods for OCT image acquisition, storage, and the remote, rapid, automated analysis of retinal thickness. Methods In UK Biobank, OCT images were acquired between 2009 and 2010 using a commercially available “spectral domain” OCT device (3D OCT-1000, Topcon). Images were obtained using a raster scan protocol, 6 mm x 6 mm in area, and consisting of 128 B-scans. OCT image sets were stored on UK Biobank servers in a central repository, adjacent to high performance computers. Rapid, automated analysis of retinal thickness was performed using custom image segmentation software developed by the Topcon Advanced Biomedical Imaging Laboratory (TABIL). This software employs dual-scale gradient information to allow for automated segmentation of nine intraretinal boundaries in a rapid fashion. Results 67,321 participants (134,642 eyes) in UK Biobank underwent OCT imaging of both eyes as part of the ocular module. 134,611 images were successfully processed, with 31 images failing segmentation analysis due to corrupted OCT files or withdrawal of subject consent for UKBB study participation. Average time taken to call up an image from the database and complete segmentation analysis was approximately 120 seconds per data set per login, and analysis of the entire dataset was completed in approximately 28 days. Conclusions We report an approach to the rapid, automated measurement of retinal thickness from nearly 140,000 OCT image sets from the UK Biobank. In the near future, these measurements will be publicly available for utilization by researchers around the world, and thus for correlation with the wealth of other data collected in UK Biobank. The automated analysis approaches we describe may be of utility for future large population-based epidemiological studies, clinical trials, and screening programs that employ OCT imaging. PMID:27716837

  11. Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.

    2016-02-15

    Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
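
    One of the two image-quality metrics above, the coefficient of variation, reduces to a one-line calculation over a uniform reference region; the sketch below uses synthetic values in place of real PET voxel data.

    ```python
    import numpy as np

    def coefficient_of_variation(roi: np.ndarray) -> float:
        """CV (in percent) of voxel values in a uniform reference ROI, e.g. liver."""
        return 100.0 * roi.std(ddof=1) / roi.mean()

    # Synthetic SUV-like values standing in for a liver ROI from the PET volume.
    roi = np.random.default_rng(7).normal(loc=2.0, scale=0.2, size=1000)
    print(f"CV = {coefficient_of_variation(roi):.1f}%")
    ```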

  12. Evaluation of orthognathic surgery on articular disc position and temporomandibular joint symptoms in skeletal class II patients: A Magnetic Resonance Imaging study.

    PubMed

    Firoozei, Gholamreza; Shahnaseri, Shirin; Momeni, Hasan; Soltani, Parisa

    2017-08-01

    The purpose of orthognathic surgery is to correct facial deformity and dental malocclusion and to obtain normal orofacial function. However, there are controversies over whether orthognathic surgery might have any negative influence on the temporomandibular (TM) joint. The purpose of this study was to evaluate the influence of orthognathic surgery on articular disc position and temporomandibular joint symptoms of skeletal class II patients by means of magnetic resonance imaging. For this purpose, fifteen patients with skeletal class II malocclusion, aged 19-32 years (mean 23 years), 10 women and 5 men, from the Isfahan Department of Oral and Maxillofacial Surgery were studied. All received LeFort I and bilateral sagittal split osteotomy (BSSO) osteotomies, and all patients received pre- and post-surgical orthodontic treatment. Magnetic resonance imaging was performed 1 day preoperatively and 3 months postoperatively. Descriptive statistics and Wilcoxon and McNemar tests were used for statistical analysis. P < 0.05 was considered significant. Disc position ranged between 4.25 and 8.09 prior to surgery (mean = 5.74 ± 1.21). After surgery the disc position range was 4.36 to 7.40 (mean = 5.65 ± 1.06). Statistical analysis showed that although the TM disc tended to move anteriorly after BSSO surgery, this difference was not statistically significant (p > 0.05). The findings of the present study revealed that orthognathic surgery does not alter the disc and condyle relationship. Therefore, it has minimal effects on an intact and functional TM joint. Key words: Orthognathic surgery, skeletal class 2, magnetic resonance imaging, temporomandibular disc.

  13. A comparison of autonomous techniques for multispectral image analysis and classification

    NASA Astrophysics Data System (ADS)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects from a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. During the last years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study provides a brief review of some classical techniques as well as a novel technique that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the spectral responses.
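
    The unsupervised side of the comparison (K-means clustering of per-pixel spectra) can be sketched with scikit-learn as follows; the spectral cube is synthetic with two invented material signatures, so the example only shows the analysis pattern, not the study's data or results.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)

    # Synthetic multispectral cube: 100 x 100 pixels, 6 bands, two materials + noise.
    spectra = np.array([[0.2, 0.3, 0.5, 0.6, 0.4, 0.3],
                        [0.4, 0.4, 0.3, 0.2, 0.6, 0.7]])
    truth = rng.integers(0, 2, size=(100, 100))
    cube = spectra[truth] + rng.normal(0, 0.02, size=(100, 100, 6))

    # Unsupervised classification: cluster the per-pixel spectra with K-means.
    pixels = cube.reshape(-1, cube.shape[-1])
    class_map = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    print("pixels per class:", np.bincount(class_map))
    ```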

  14. Developing Matlab scripts for image analysis and quality assessment

    NASA Astrophysics Data System (ADS)

    Vaiopoulos, A. D.

    2011-11-01

    Image processing is a very helpful tool in many fields of modern science that involve digital imaging examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from the visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy, and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language, which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
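
    The paper's indices were implemented in Matlab; as a language-neutral illustration, the Python sketch below computes two of the indices mentioned (correlation coefficient and histogram entropy) between an original and a processed image, with synthetic arrays standing in for Hyperion/ALI data.

    ```python
    import numpy as np

    def correlation_coefficient(a: np.ndarray, b: np.ndarray) -> float:
        """Pearson correlation between an original and a processed image."""
        return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

    def entropy(img: np.ndarray, bins: int = 256) -> float:
        """Shannon entropy (bits) of the image gray-level histogram."""
        hist, _ = np.histogram(img, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(9)
    original = rng.integers(0, 256, (256, 256)).astype(float)
    processed = 0.8 * original + 0.2 * rng.integers(0, 256, (256, 256))  # stand-in for a fused product
    print(f"CC = {correlation_coefficient(original, processed):.3f}, "
          f"entropy = {entropy(processed):.2f} bits")
    ```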

  15. Integrated change detection and temporal trajectory analysis of coastal wetlands using high spatial resolution Korean Multi-Purpose Satellite series imagery

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoang Hai; Tran, Hien; Sunwoo, Wooyeon; Yi, Jong-hyuk; Kim, Dongkyun; Choi, Minha

    2017-04-01

    A series of multispectral high-resolution Korean Multi-Purpose Satellite (KOMPSAT) images was used to detect the geographical changes in four different tidal flats between the Yellow Sea and the west coast of South Korea. The method of unsupervised classification was used to generate a series of land use/land cover (LULC) maps from satellite images, which were then used as input for temporal trajectory analysis to detect the temporal change of coastal wetlands and its association with natural and anthropogenic activities. The accurately classified LULC maps of KOMPSAT images, with overall accuracy ranging from 83.34% to 95.43%, indicate that these multispectral high-resolution satellite data are highly applicable to the generation of high-quality thematic maps for extracting wetlands. The result of the trajectory analysis showed that, while the variation of the tidal flats in the Gyeonggi and Jeollabuk provinces was well correlated with the regular tidal regimes, the reductive trajectory of the wetland areas belonging to the Saemangeum province was caused by a high degree of human-induced activities including large reclamation and urbanization. The conservation of the Jeungdo Wetland Protected Area in the Jeollanam province revealed that effective social and environmental policies could help in protecting coastal wetlands from degradation.

  16. Is the spatial distribution of brain lesions associated with closed-head injury predictive of subsequent development of attention-deficit/hyperactivity disorder? Analysis with brain-image database

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Megalooikonomou, V.; Davatzikos, C.; Chen, A.; Bryan, R. N.; Gerring, J. P.

    1999-01-01

    PURPOSE: To determine whether there is an association between the spatial distribution of lesions detected at magnetic resonance (MR) imaging of the brain in children after closed-head injury and the development of secondary attention-deficit/hyperactivity disorder (ADHD). MATERIALS AND METHODS: Data obtained from 76 children without prior history of ADHD were analyzed. MR images were obtained 3 months after closed-head injury. After manual delineation of lesions, images were registered to the Talairach coordinate system. For each subject, registered images and secondary ADHD status were integrated into a brain-image database, which contains depiction (visualization) and statistical analysis software. Using this database, we assessed visually the spatial distributions of lesions and performed statistical analysis of image and clinical variables. RESULTS: Of the 76 children, 15 developed secondary ADHD. Depiction of the data suggested that children who developed secondary ADHD had more lesions in the right putamen than children who did not develop secondary ADHD; this impression was confirmed statistically. After Bonferroni correction, we could not demonstrate significant differences between secondary ADHD status and lesion burdens for the right caudate nucleus or the right globus pallidus. CONCLUSION: Closed-head injury-induced lesions in the right putamen in children are associated with subsequent development of secondary ADHD. Depiction software is useful in guiding statistical analysis of image data.

  17. Dermatological Feasibility of Multimodal Facial Color Imaging Modality for Cross-Evaluation of Facial Actinic Keratosis

    PubMed Central

    Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo

    2010-01-01

    Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462

  18. Reducing noise component on medical images

    NASA Astrophysics Data System (ADS)

    Semenishchev, Evgeny; Voronin, Viacheslav; Dub, Vladimir; Balabaeva, Oksana

    2018-04-01

    Visualization and analysis of medical data is an active research area. Medical images are used in microbiology, genetics, roentgenology, oncology, surgery, ophthalmology, and other fields, and initial data processing is a major step towards obtaining a good diagnostic result. This paper considers an approach that filters an image while preserving object borders. The proposed algorithm is based on sequential data processing. At the first stage, local areas are determined using threshold processing together with the classical ICI algorithm. The second stage uses a method based on two criteria, namely the L2 norm and the first-order squared difference. To preserve object boundaries, the transition boundary and its local neighborhood are processed with a fixed-coefficient filtering algorithm. Reconstructed images from CT, X-ray, and microbiological studies are shown as examples, and the test images demonstrate the effectiveness of the proposed algorithm, indicating its applicability to many medical imaging applications.

  19. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.
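
    A small sketch of the marker-selection step described above: pixels whose classification probability exceeds a threshold keep their class label as markers, all others are left unmarked. The probability source and the threshold value are assumptions; the constrained HSEG step itself is not reproduced.

      # Sketch: select the most reliably classified pixels as markers.
      import numpy as np

      def select_markers(prob_map, threshold=0.9):
          """prob_map: (rows, cols, n_classes) class probabilities.
          Returns an int map: class label for reliable pixels, -1 elsewhere."""
          best_prob = prob_map.max(axis=-1)
          best_class = prob_map.argmax(axis=-1)
          return np.where(best_prob >= threshold, best_class, -1)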

  20. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application

    PubMed Central

    Maxwell, Susan K.

    2010-01-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. PMID:21135917

  1. Noise Gating Solar Images

    NASA Astrophysics Data System (ADS)

    DeForest, Craig; Seaton, Daniel B.; Darnell, John A.

    2017-08-01

    I present and demonstrate a new, general purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise-gating exploits the same property -- "coherence" -- that we use to identify features in images, to separate image features from noise. Processing image sequences through 3-D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
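
    A toy illustration of the gating idea in the spatiotemporal Fourier domain: components whose magnitude stays near an estimated noise floor are zeroed, the rest are kept. The published method operates on overlapping apodized sub-cubes with a calibrated noise model; this global, single-cube version and its gate factor are simplifying assumptions, not the author's code.

      # Toy sketch of Fourier-domain noise gating on an image sequence.
      import numpy as np

      def noise_gate(cube, gate_factor=3.0):
          """cube: (time, y, x) image sequence -> gated sequence (same shape)."""
          spec = np.fft.fftn(cube)
          mag = np.abs(spec)
          noise_floor = np.median(mag)          # crude noise-level estimate
          mask = mag > gate_factor * noise_floor
          return np.real(np.fft.ifftn(spec * mask))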

  2. Wavelet analysis of polarization azimuths maps for laser images of myocardial tissue for the purpose of diagnosing acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Ya.; Peresunko, A. P.; Bakko, Bouzan Adel; Kushnerick, L. Ya.

    2011-09-01

    This paper presents the foundations of a large-scale, localized wavelet polarization analysis of inhomogeneous laser images of histological sections of myocardial tissue. Relations between the structure of the wavelet coefficients and the cause of death were identified. An optical model of the polycrystalline networks of myocardium protein fibrils is presented, and a technique for determining the coordinate distribution of the polarization azimuth at the points of laser images of myocardium histological sections is suggested. Results are presented on the interrelation between the cause of death and the statistical parameters (statistical moments of the 1st-4th order) that characterize the distributions of wavelet coefficients of the polarization maps of myocardium layers.

  3. Contour Detection and Completion for Inpainting and Segmentation Based on Topological Gradient and Fast Marching Algorithms

    PubMed Central

    Auroux, Didier; Cohen, Laurent D.; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We then present two numerical applications of this hybrid algorithm: image inpainting and segmentation. PMID:22194734

  4. Multispectral UV imaging for fast and non-destructive quality control of chemical and physical tablet attributes.

    PubMed

    Klukkert, Marten; Wu, Jian X; Rantanen, Jukka; Carstensen, Jens M; Rades, Thomas; Leopold, Claudia S

    2016-07-30

    Monitoring of tablet quality attributes in direct vicinity of the production process requires analytical techniques that allow fast, non-destructive, and accurate tablet characterization. The overall objective of this study was to investigate the applicability of multispectral UV imaging as a reliable, rapid technique for estimation of the tablet API content and tablet hardness, as well as determination of tablet intactness and the tablet surface density profile. One of the aims was to establish an image analysis approach based on multivariate image analysis and pattern recognition to evaluate the potential of UV imaging for automatized quality control of tablets with respect to their intactness and surface density profile. Various tablets of different composition and different quality regarding their API content, radial tensile strength, intactness, and surface density profile were prepared using an eccentric as well as a rotary tablet press at compression pressures from 20 MPa up to 410 MPa. It was found that UV imaging can provide relevant information on both chemical and physical tablet attributes. The tablet API content and radial tensile strength could be estimated by UV imaging combined with partial least squares analysis. Furthermore, an image analysis routine was developed and successfully applied to the UV images that provided qualitative information on physical tablet surface properties such as intactness and surface density profiles, as well as quantitative information on variations in the surface density. In conclusion, this study demonstrates that UV imaging combined with image analysis is an effective and non-destructive method to determine chemical and physical quality attributes of tablets and is a promising approach for (near) real-time monitoring of the tablet compaction process and formulation optimization purposes. Copyright © 2015 Elsevier B.V. All rights reserved.
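
    A sketch of the chemometric step named above: partial least squares regression from per-tablet UV-image features to API content. The choice of feature (mean reflectance per spectral band), the data layout, and the component count are illustrative assumptions rather than the study's settings.

      # Sketch: PLS regression from UV-image features to API content.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def tablet_features(uv_images):
          """uv_images: (n_tablets, bands, rows, cols) -> (n_tablets, bands) mean reflectance."""
          return uv_images.mean(axis=(2, 3))

      def fit_api_model(features, api_content, n_components=3):
          """Fit a PLS model; predictions come from pls.predict(new_features)."""
          pls = PLSRegression(n_components=n_components)
          pls.fit(features, api_content)
          return pls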

  5. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between the CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and incorporation of easy-to-use, on-line, and interactive features should help to improve the clinical utility of this technology.

  6. Quantitative analysis of rib movement based on dynamic chest bone images: preliminary results

    NASA Astrophysics Data System (ADS)

    Tanaka, R.; Sanada, S.; Oda, M.; Mitsutaka, M.; Suzuki, K.; Sakuta, K.; Kawashima, H.

    2014-03-01

    Rib movement during respiration is one of the diagnostic criteria in pulmonary impairments. In general, the rib movement is assessed in fluoroscopy. However, the shadows of lung vessels and bronchi overlapping ribs prevent accurate quantitative analysis of rib movement. Recently, an image-processing technique for separating bones from soft tissue in static chest radiographs, called "bone suppression technique", has been developed. Our purpose in this study was to evaluate the usefulness of dynamic bone images created by the bone suppression technique in quantitative analysis of rib movement. Dynamic chest radiographs of 10 patients were obtained using a dynamic flat-panel detector (FPD). Bone suppression technique based on a massive-training artificial neural network (MTANN) was applied to the dynamic chest images to create bone images. Velocity vectors were measured in local areas on the dynamic bone images, which formed a map. The velocity maps obtained with bone and original images for scoliosis and normal cases were compared to assess the advantages of bone images. With dynamic bone images, we were able to quantify and distinguish movements of ribs from those of other lung structures accurately. Limited rib movements of scoliosis patients appeared as reduced rib velocity vectors. Vector maps in all normal cases exhibited left-right symmetric distributions, whereas those in abnormal cases showed nonuniform distributions. In conclusion, dynamic bone images were useful for accurate quantitative analysis of rib movements: Limited rib movements were indicated as a reduction of rib movement and left-right asymmetric distribution on vector maps. Thus, dynamic bone images can be a new diagnostic tool for quantitative analysis of rib movements without additional radiation dose.
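
    One plausible realization of the velocity-vector measurement described above: dense optical flow between consecutive bone-image frames, averaged over local blocks to form a vector map. The specific flow estimator (OpenCV's Farneback method), the block size, and the 8-bit grayscale frames are assumptions; the bone-suppression (MTANN) step is not reproduced here.

      # Sketch: block-averaged velocity vectors from dense optical flow.
      import numpy as np
      import cv2

      def velocity_map(prev_frame, next_frame, block=32):
          """prev_frame, next_frame: 8-bit grayscale images of equal size."""
          flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = flow.shape[:2]
          vectors = []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  v = flow[y:y + block, x:x + block].reshape(-1, 2).mean(axis=0)
                  vectors.append(((x + block // 2, y + block // 2), v))
          return vectors  # list of (block centre, mean [dx, dy]) pairs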

  7. Plaque echodensity and textural features are associated with histologic carotid plaque instability.

    PubMed

    Doonan, Robert J; Gorgui, Jessica; Veinot, Jean P; Lai, Chi; Kyriacou, Efthyvoulos; Corriveau, Marc M; Steinmetz, Oren K; Daskalopoulou, Stella S

    2016-09-01

    Carotid plaque echodensity and texture features predict cerebrovascular symptomatology. Our purpose was to determine the association of echodensity and textural features obtained from a digital image analysis (DIA) program with histologic features of plaque instability as well as to identify the specific morphologic characteristics of unstable plaques. Patients scheduled to undergo carotid endarterectomy were recruited and underwent carotid ultrasound imaging. DIA was performed to extract echodensity and textural features using Plaque Texture Analysis software (LifeQ Medical Ltd, Nicosia, Cyprus). Carotid plaque surgical specimens were obtained and analyzed histologically. Principal component analysis (PCA) was performed to reduce imaging variables. Logistic regression models were used to determine if PCA variables and individual imaging variables predicted histologic features of plaque instability. Image analysis data from 160 patients were analyzed. Individual imaging features of plaque echolucency and homogeneity were associated with a more unstable plaque phenotype on histology. These results were independent of age, sex, and degree of carotid stenosis. PCA reduced 39 individual imaging variables to five PCA variables. PCA1 and PCA2 were significantly associated with overall plaque instability on histology (both P = .02), whereas PCA3 did not achieve statistical significance (P = .07). DIA features of carotid plaques are associated with histologic plaque instability as assessed by multiple histologic features. Importantly, unstable plaques on histology appear more echolucent and homogeneous on ultrasound imaging. These results are independent of stenosis, suggesting that image analysis may have a role in refining the selection of patients who undergo carotid endarterectomy. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
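
    A minimal sketch of the statistical pipeline described above: standardize the echodensity/texture variables, reduce them with PCA, and relate the leading components to a binary histologic-instability label with logistic regression. The variable names and the number of retained components are assumptions, not the study's exact configuration.

      # Sketch: PCA reduction followed by logistic regression.
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression

      def fit_plaque_model(imaging_features, unstable_labels, n_components=5):
          """imaging_features: (n_patients, n_variables); unstable_labels: 0/1 per patient."""
          model = make_pipeline(StandardScaler(),
                                PCA(n_components=n_components),
                                LogisticRegression(max_iter=1000))
          model.fit(imaging_features, unstable_labels)
          return model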

  8. Recent Advances of Malaria Parasites Detection Systems Based on Mathematical Morphology

    PubMed Central

    Di Ruberto, Cecilia; Kocher, Michel

    2018-01-01

    Malaria is an epidemic health disease and a rapid, accurate diagnosis is necessary for proper intervention. Generally, pathologists visually examine blood stained slides for malaria diagnosis. Nevertheless, this kind of visual inspection is subjective, error-prone and time-consuming. In order to overcome the issues, numerous methods of automatic malaria diagnosis have been proposed so far. In particular, many researchers have used mathematical morphology as a powerful tool for computer aided malaria detection and classification. Mathematical morphology is not only a theory for the analysis of spatial structures, but also a very powerful technique widely used for image processing purposes and employed successfully in biomedical image analysis, especially in preprocessing and segmentation tasks. Microscopic image analysis and particularly malaria detection and classification can greatly benefit from the use of morphological operators. The aim of this paper is to present a review of recent mathematical morphology based methods for malaria parasite detection and identification in stained blood smears images. PMID:29419781
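
    An illustrative example of the kind of morphology-based preprocessing and segmentation reviewed above: opening to suppress small debris, Otsu thresholding, hole filling, and removal of small objects to isolate candidate cells in a stained-smear image. The structuring-element size and area threshold are arbitrary assumptions, not taken from any specific reviewed method.

      # Sketch: morphological segmentation of stained-cell candidates.
      from skimage import filters, morphology
      from scipy import ndimage as ndi

      def segment_cells(gray_smear):
          """gray_smear: 2-D grayscale image with dark stained cells."""
          opened = morphology.opening(gray_smear, morphology.disk(2))
          mask = opened < filters.threshold_otsu(opened)   # cells darker than background
          mask = ndi.binary_fill_holes(mask)
          mask = morphology.remove_small_objects(mask, min_size=64)
          return mask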

  9. Recent Advances of Malaria Parasites Detection Systems Based on Mathematical Morphology.

    PubMed

    Loddo, Andrea; Di Ruberto, Cecilia; Kocher, Michel

    2018-02-08

    Malaria is an epidemic health disease and a rapid, accurate diagnosis is necessary for proper intervention. Generally, pathologists visually examine blood stained slides for malaria diagnosis. Nevertheless, this kind of visual inspection is subjective, error-prone and time-consuming. In order to overcome the issues, numerous methods of automatic malaria diagnosis have been proposed so far. In particular, many researchers have used mathematical morphology as a powerful tool for computer aided malaria detection and classification. Mathematical morphology is not only a theory for the analysis of spatial structures, but also a very powerful technique widely used for image processing purposes and employed successfully in biomedical image analysis, especially in preprocessing and segmentation tasks. Microscopic image analysis and particularly malaria detection and classification can greatly benefit from the use of morphological operators. The aim of this paper is to present a review of recent mathematical morphology based methods for malaria parasite detection and identification in stained blood smears images.

  10. Heterogeneous Optimization Framework: Reproducible Preprocessing of Multi-Spectral Clinical MRI for Neuro-Oncology Imaging Research.

    PubMed

    Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S

    2016-07-01

    Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.

  11. GPU-based prompt gamma ray imaging from boron neutron capture therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Do-Kun; Jung, Joo-Young; Suk Suh, Tae, E-mail: suhsanta@catholic.ac.kr

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusions: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray image reconstruction using the GPU computation for BNCT simulations.
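
    For orientation, a CPU reference sketch of the basic expectation-maximization update that ordered-subset EM (OSEM) accelerates by splitting the projections into subsets. The paper's modified OSEM variant and its GPU kernels are not reproduced; the dense system matrix A is an assumed stand-in for the real projector.

      # Sketch: basic MLEM update x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
      import numpy as np

      def mlem(A, projections, n_iter=20, eps=1e-9):
          """A: (n_projections, n_voxels) system matrix; projections: measured counts."""
          x = np.ones(A.shape[1])
          sensitivity = A.T @ np.ones(A.shape[0])
          for _ in range(n_iter):
              estimate = A @ x
              ratio = projections / np.maximum(estimate, eps)
              x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
          return x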

  12. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    PubMed Central

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
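
    A 2-D illustration of the GLCM texture features used above (energy, inertia/contrast, correlation). The paper computes them on 3-D co-occurrence matrices of reconstructed volumes, which scikit-image does not provide directly, so this slice-wise version and the 8-bit quantization are simplifying assumptions.

      # Sketch: slice-wise GLCM texture features with scikit-image.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(slice_8bit):
          """slice_8bit: 2-D uint8 image -> dict of texture features."""
          glcm = graycomatrix(slice_8bit, distances=[1],
                              angles=[0, np.pi / 2], levels=256,
                              symmetric=True, normed=True)
          return {
              "energy": graycoprops(glcm, "energy").mean(),
              "contrast": graycoprops(glcm, "contrast").mean(),   # a.k.a. inertia
              "correlation": graycoprops(glcm, "correlation").mean(),
          }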

  13. Features and limitations of mobile tablet devices for viewing radiological images.

    PubMed

    Grunert, J H

    2015-03-01

    Mobile radiological image display systems are becoming increasingly common, necessitating a comparison of the features of these systems, specifically the operating system employed, connection to stationary PACS, data security, and range of image display and image analysis functions. In the fall of 2013, a total of 17 PACS suppliers were surveyed regarding the technical features of 18 mobile radiological image display systems using a standardized questionnaire. The study also examined to what extent the technical specifications of the mobile image display systems satisfy the provisions of the German Medical Devices Act as well as the provisions of the German X-ray ordinance (RöV). There are clear differences in terms of how the mobile systems connect to the stationary PACS. Web-based solutions allow the mobile image display systems to function independently of their operating systems. The examined systems differed very little in terms of image display and image analysis functions. Mobile image display systems complement stationary PACS and can be used to view images. The impacts of the new quality assurance guidelines (QS-RL) as well as the upcoming new standard DIN 6868-157 on the acceptance testing of mobile image display units for the purpose of image evaluation are discussed. © Georg Thieme Verlag KG Stuttgart · New York.

  14. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling

    NASA Astrophysics Data System (ADS)

    Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.

  15. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.

    PubMed

    Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.

  16. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method for image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and pulse sequences for an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of simulation RF coils were measured and compared using the standard sequence with different clinical diagnostic coils. We used simulation sequences with simulation coils to test the quality of image and advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. Those two image quality parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequences test, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well-controlled in the isocenter and 10 cm off-center within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performances of simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
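
    A simplified sketch of one semiautomatic check of the kind mentioned above: percent integral uniformity (PIU) over a circular ROI, using the ACR-style formula PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin)). Approximating Smax/Smin by high/low percentiles inside the ROI, rather than by placed sub-ROIs, is an assumption of this sketch.

      # Sketch: percent integral uniformity over a circular ROI.
      import numpy as np

      def percent_integral_uniformity(image, center, radius):
          """image: 2-D array; center: (row, col); radius in pixels."""
          yy, xx = np.indices(image.shape)
          roi = image[(yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2]
          s_max, s_min = np.percentile(roi, 99.5), np.percentile(roi, 0.5)
          return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))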

  17. Texture classification of lung computed tomography images

    NASA Astrophysics Data System (ADS)

    Pheng, Hang See; Shamsuddin, Siti M.

    2013-03-01

    Development of algorithms for computer-aided diagnosis (CAD) schemes is growing rapidly to assist the radiologist in medical image interpretation. Texture analysis of computed tomography (CT) scans is an important preliminary stage in computerized detection and classification systems for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been widely used for image texture description. The extraction of texture feature values is essential for a CAD system, especially for classifying normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods for the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are the Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48), and a Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiments and testing purposes, publicly available datasets from the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.

  18. The RABiT: A Rapid Automated Biodosimetry Tool For Radiological Triage. II. Technological Developments

    PubMed Central

    Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.

    2011-01-01

    Purpose Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703

  19. Challenging Anthropocentric Analysis of Visual Data: A Relational Materialist Methodological Approach to Educational Research

    ERIC Educational Resources Information Center

    Hultman, Karin; Lenz Taguchi, Hillevi

    2010-01-01

    The purpose of this paper is to challenge the habitual anthropocentric gaze we use when analysing educational data, which takes human beings as the starting point and centre, and gives humans a self-evident higher position above other matter in reality. By enacting analysis of photographic images from a preschool playground, using a "relational…

  20. Validity of multislice computerized tomography for diagnosis of maxillofacial fractures using an independent workstation.

    PubMed

    Dos Santos, Denise Takehana; Costa e Silva, Adriana Paula Andrade; Vannier, Michael Walter; Cavalcanti, Marcelo Gusmão Paraiso

    2004-12-01

    The purpose of this study was to demonstrate the sensitivity and specificity of multislice computerized tomography (CT) for diagnosis of maxillofacial fractures following specific protocols using an independent workstation. The study population consisted of 56 patients with maxillofacial fractures who were submitted to a multislice CT. The original data were transferred to an independent workstation using volumetric imaging software to generate axial images and simultaneous multiplanar (MPR) and 3-dimensional (3D-CT) volume rendering reconstructed images. The images were then processed and interpreted by 2 examiners using the following protocols independently of each other: axial, MPR/axial, 3D-CT images, and the association of axial/MPR/3D images. The clinical/surgical findings were considered the gold standard corroborating the diagnosis of the fractures and their anatomic localization. The statistical analysis was carried out using validity and chi-squared tests. The association of axial/MPR/3D images indicated a higher sensitivity (range 95.8%) and specificity (range 99%) than the other methods regarding the analysis of all regions. CT imaging demonstrated high specificity and sensitivity for maxillofacial fractures. The association of axial/MPR/3D-CT images added important information in relationship to other CT protocols.

  1. 42 CFR 414.68 - Imaging accreditation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of the organization's data management and analysis system for its surveys and accreditation decisions... organizations. (iv) Notify CMS, in writing, at least 30 calendar days in advance of the effective date of any... to designate and approve independent accreditation organizations for purposes of accrediting...

  2. 42 CFR 414.68 - Imaging accreditation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of the organization's data management and analysis system for its surveys and accreditation decisions... organizations. (iv) Notify CMS, in writing, at least 30 calendar days in advance of the effective date of any... to designate and approve independent accreditation organizations for purposes of accrediting...

  3. Characterization of Atrophic Changes in the Cerebral Cortex Using Fractal Dimensional Analysis

    PubMed Central

    George, Anuh T.; Jeon, Tina; Hynan, Linda S.; Youn, Teddy S.; Kennedy, David N.; Dickerson, Bradford

    2010-01-01

    The purpose of this project is to apply a modified fractal analysis technique to high-resolution T1 weighted magnetic resonance images in order to quantify the alterations in the shape of the cerebral cortex that occur in patients with Alzheimer’s disease. Images were selected from the Alzheimer’s Disease Neuroimaging Initiative database (Control N=15, Mild-Moderate AD N=15). The images were segmented using a semi-automated analysis program. Four coronal and three axial profiles of the cerebral cortical ribbon were created. The fractal dimensions (Df) of the cortical ribbons were then computed using a box-counting algorithm. The mean Df of the cortical ribbons from AD patients were lower than age-matched controls on six of seven profiles. The fractal measure has regional variability which reflects local differences in brain structure. Fractal dimension is complementary to volumetric measures and may assist in identifying disease state or disease progression. PMID:20740072
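
    A minimal box-counting estimate of the fractal dimension of a binary cortical-ribbon profile, illustrating the algorithm named above: count occupied boxes at a series of box sizes and fit log(count) against log(1/size). The particular box sizes are illustrative assumptions.

      # Sketch: box-counting fractal dimension of a 2-D binary profile.
      import numpy as np

      def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
          counts = []
          for size in box_sizes:
              h, w = binary_image.shape
              trimmed = binary_image[:h - h % size, :w - w % size]
              blocks = trimmed.reshape(h // size, size, w // size, size)
              occupied = blocks.any(axis=(1, 3)).sum()   # boxes touching the ribbon
              counts.append(max(occupied, 1))
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope  # estimated fractal dimension Df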

  4. Generating land cover boundaries from remotely sensed data using object-based image analysis: overview and epidemiological application.

    PubMed

    Maxwell, Susan K

    2010-12-01

    Satellite imagery and aerial photography represent a vast resource to significantly enhance environmental mapping and modeling applications for use in understanding spatio-temporal relationships between environment and health. Deriving boundaries of land cover objects, such as trees, buildings, and crop fields, from image data has traditionally been performed manually using a very time consuming process of hand digitizing. Boundary detection algorithms are increasingly being applied using object-based image analysis (OBIA) technology to automate the process. The purpose of this paper is to present an overview and demonstrate the application of OBIA for delineating land cover features at multiple scales using a high resolution aerial photograph (1 m) and a medium resolution Landsat image (30 m) time series in the context of a pesticide spray drift exposure application. Copyright © 2010. Published by Elsevier Ltd.

  5. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. For traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method and design an MSR with more than three scales. Based on mathematical analysis and deductions, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
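
    A sketch of a generic multi-scale Retinex with histogram truncation in the spirit of the method above: per-scale log-ratio of the image to its Gaussian-blurred surround, averaged over the scales, then clipped at low/high percentiles and rescaled for display. The scale list, equal weights, and clip percentiles are illustrative assumptions, not the authors' derived parameters.

      # Sketch: multi-scale Retinex with percentile-based histogram truncation.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_retinex(image, sigmas=(15, 80, 250), clip=1.0):
          """image: 2-D grayscale array (apply per channel for color)."""
          img = image.astype(np.float64) + 1.0
          msr = np.zeros_like(img)
          for sigma in sigmas:
              msr += np.log(img) - np.log(gaussian_filter(img, sigma) + 1.0)
          msr /= len(sigmas)
          lo, hi = np.percentile(msr, clip), np.percentile(msr, 100 - clip)
          msr = np.clip(msr, lo, hi)                   # histogram truncation
          return (msr - lo) / (hi - lo + 1e-12)        # remap to [0, 1] for display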

  6. TU-FG-BRB-07: GPU-Based Prompt Gamma Ray Imaging From Boron Neutron Capture Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, S; Suh, T; Yoon, D

    Purpose: The purpose of this research is to perform the fast reconstruction of a prompt gamma ray image using a graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. Methods: To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, the modified ordered subset expectation maximization reconstruction algorithm using the GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU). Also, the accuracy of the reconstructed image was evaluated by a receiver operating characteristic (ROC) curve analysis. Results: The image reconstruction time using the GPU was 196 times faster than the conventional reconstruction time using the CPU. For the four BURs, the area under curve values from the ROC curve were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). Conclusion: The tomographic image using the prompt gamma ray event from the BNCT simulation was acquired using the GPU computation in order to perform a fast reconstruction during treatment. The authors verified the feasibility of the prompt gamma ray reconstruction using the GPU computation for BNCT simulations.

  7. SU-E-I-100: Heterogeneity Studying for Primary and Lymphoma Tumors by Using Multi-Scale Image Texture Analysis with PET-CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Qinfen; Li, H

    Purpose: The purpose of this research is to study the tumor heterogeneity of primary and lymphoma tumors using multi-scale texture analysis of PET-CT images, where tumor heterogeneity is expressed by texture features. Methods: Datasets were collected from 12 lung cancer patients, in all of whom both primary and lymphoma tumors were detected. All patients underwent a whole-body 18F-FDG PET/CT scan before treatment. The regions of interest (ROI) of the primary and lymphoma tumors were contoured by experienced clinical doctors and then extracted automatically using Matlab software. According to the geometric size of the contour structure, the tumor images were decomposed by a multi-scale method. A wavelet transform with L layers of sampling was performed on the ROI structures within the images, yielding wavelet sub-bands of the same size as the original image; the number of sub-bands is 3L+1. The gray-level co-occurrence matrix (GLCM) was calculated within the different sub-bands, and energy, inertia, correlation, and gray in-homogeneity were extracted from the GLCM. Finally, a statistical heterogeneity analysis of the primary and lymphoma tumors was performed using the texture features. Results: Energy, inertia, correlation, and gray in-homogeneity were calculated in our experiments for the statistical heterogeneity analysis. Within the same patient, the energy of the primary and lymphoma tumors was equal, while the gray in-homogeneity and inertia of the primary tumor were 2.59595±0.00855 and 0.6439±0.0007, respectively, and those of the lymphoma were 2.60115±0.00635 and 0.64435±0.00055, respectively. The experiments showed that although the lymphoma volume was smaller than that of the primary tumor, its gray in-homogeneity and inertia were higher within the same patient; the correlation for lymphoma tumors was zero, while the correlation for the primary tumor was slightly strong. Conclusion: This study showed effective heterogeneity differences between primary and lymphoma tumors by multi-scale image texture analysis. This work is supported by the National Natural Science Foundation of China (No. 61201441), the Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), the Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), and Jinan youth science and technology star (No. 20120109).

  8. Electronic Still Camera Project on STS-48

    NASA Technical Reports Server (NTRS)

    1991-01-01

    On behalf of NASA, the Office of Commercial Programs (OCP) has signed a Technical Exchange Agreement (TEA) with Autometric, Inc. (Autometric) of Alexandria, Virginia. The purpose of this agreement is to evaluate and analyze a high-resolution Electronic Still Camera (ESC) for potential commercial applications. During the mission, Autometric will provide unique photo analysis and hard-copy production. Once the mission is complete, Autometric will furnish NASA with an analysis of the ESC's capabilities. Electronic still photography is a developing technology providing the means by which a hand held camera electronically captures and produces a digital image with resolution approaching film quality. The digital image, stored on removable hard disks or small optical disks, can be converted to a format suitable for downlink transmission, or it can be enhanced using image processing software. The on-orbit ability to enhance or annotate high-resolution images and then downlink these images in real-time will greatly improve Space Shuttle and Space Station capabilities in Earth observations and on-board photo documentation.

  9. Experimental assessment and analysis of super-resolution in fluorescence microscopy based on multiple-point spread function fitting of spectrally demultiplexed images

    NASA Astrophysics Data System (ADS)

    Nishimura, Takahiro; Kimura, Hitoshi; Ogura, Yusuke; Tanida, Jun

    2018-06-01

    This paper presents an experimental assessment and analysis of super-resolution microscopy based on multiple-point spread function fitting of spectrally demultiplexed images using a designed DNA structure as a test target. For this purpose, a DNA structure was designed to have binding sites at a certain interval that is smaller than the diffraction limit. The structure was labeled with several types of quantum dots (QDs) to acquire their spatial information as spectrally encoded images. The obtained images are analyzed with a point spread function multifitting algorithm to determine the QD locations that indicate the binding site positions. The experimental results show that the labeled locations can be observed beyond the diffraction-limited resolution using three-colored fluorescence images that were obtained with a confocal fluorescence microscope. Numerical simulations show that labeling with eight types of QDs enables the positions aligned at 27.2-nm pitches on the DNA structure to be resolved with high accuracy.

  10. Texture Analysis of Chaotic Coupled Map Lattices Based Image Encryption Algorithm

    NASA Astrophysics Data System (ADS)

    Khan, Majid; Shah, Tariq; Batool, Syeda Iram

    2014-09-01

    Recently, data security has become essential in many settings such as web communication, multimedia systems, medical imaging, telemedicine, and military communication. However, many existing schemes face issues such as a lack of robustness and security. In this letter, after examining the fundamental properties of chaotic trigonometric maps and coupled map lattices, we present a chaos-based image encryption algorithm built on coupled map lattices. The proposed mechanism reduces the periodic effects of the ergodic dynamical systems used in chaos-based image encryption. To assess the security of an image encrypted with this scheme, the correlation of adjacent pixels and the texture characteristics were analyzed. The algorithm aims to minimize the problems that arise in image encryption.
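
    A toy sketch in the spirit of chaos-based image encryption: a single logistic map generates a keystream that is XOR-ed with the image bytes. The coupled-map-lattice construction and the permutation/diffusion stages of the actual scheme are not reproduced; the map parameters are arbitrary assumptions, and this sketch is not a secure cipher.

      # Sketch: logistic-map keystream XOR applied to image bytes.
      import numpy as np

      def logistic_keystream(length, x0=0.3456, r=3.99, burn_in=1000):
          x = x0
          for _ in range(burn_in):          # discard transient iterations
              x = r * x * (1.0 - x)
          stream = np.empty(length, dtype=np.uint8)
          for i in range(length):
              x = r * x * (1.0 - x)
              stream[i] = int(x * 256) % 256
          return stream

      def xor_encrypt(image_u8, x0=0.3456):
          """image_u8: uint8 array; applying the same call again decrypts."""
          flat = image_u8.ravel()
          ks = logistic_keystream(flat.size, x0=x0)
          return np.bitwise_xor(flat, ks).reshape(image_u8.shape)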

  11. Photogrammetry of the solar aureole

    NASA Technical Reports Server (NTRS)

    Deepak, A.

    1978-01-01

    This paper presents a photogrammetric analysis of the solar aureole for the purpose of making photographic sky radiance measurements for determining aerosol physical characteristics. A photograph is essentially a projection of a 3-D object space onto a 2-D image space. Photogrammetry deals with relations that exist between the object and the image spaces. The main problem of photogrammetry is the reconstruction of configurations in the object space by means of the image space data. It is shown that the almucantar projects onto the photographic plane as a conic section and the sun vertical as a straight line.

  12. Portfolio: a prototype workstation for development and evaluation of tools for analysis and management of digital portal images.

    PubMed

    Boxwala, A A; Chaney, E L; Fritsch, D S; Friedman, C P; Rosenman, J G

    1998-09-01

    The purpose of this investigation was to design and implement a prototype physician workstation, called PortFolio, as a platform for developing and evaluating, by means of controlled observer studies, user interfaces and interactive tools for analyzing and managing digital portal images. The first observer study was designed to measure physician acceptance of workstation technology, as an alternative to a view box, for inspection and analysis of portal images for detection of treatment setup errors. The observer study was conducted in a controlled experimental setting to evaluate physician acceptance of the prototype workstation technology exemplified by PortFolio. PortFolio incorporates a windows user interface, a compact kit of carefully selected image analysis tools, and an object-oriented data base infrastructure. The kit evaluated in the observer study included tools for contrast enhancement, registration, and multimodal image visualization. Acceptance was measured in the context of performing portal image analysis in a structured protocol designed to simulate clinical practice. The acceptability and usage patterns were measured from semistructured questionnaires and logs of user interactions. Radiation oncologists, the subjects for this study, perceived the tools in PortFolio to be acceptable clinical aids. Concerns were expressed regarding user efficiency, particularly with respect to the image registration tools. The results of our observer study indicate that workstation technology is acceptable to radiation oncologists as an alternative to a view box for clinical detection of setup errors from digital portal images. Improvements in implementation, including more tools and a greater degree of automation in the image analysis tasks, are needed to make PortFolio more clinically practical.

  13. Foreign object detection and removal to improve automated analysis of chest radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogeweg, Laurens; Sanchez, Clara I.; Melendez, Jaime

    2013-07-15

    Purpose: Chest radiographs commonly contain projections of foreign objects, such as buttons, brassier clips, jewellery, or pacemakers and wires. The presence of these structures can substantially affect the output of computer analysis of these images. An automated method is presented to detect, segment, and remove foreign objects from chest radiographs. Methods: Detection is performed using supervised pixel classification with a kNN classifier, resulting in a probability estimate per pixel to belong to a projected foreign object. Segmentation is performed by grouping and post-processing pixels with a probability above a certain threshold. Next, the objects are replaced by texture inpainting. Results: The method is evaluated in experiments on 257 chest radiographs. The detection at pixel level is evaluated with receiver operating characteristic analysis on pixels within the unobscured lung fields and an A_z value of 0.949 is achieved. Free response operator characteristic analysis is performed at the object level, and 95.6% of objects are detected with on average 0.25 false positive detections per image. To investigate the effect of removing the detected objects through inpainting, a texture analysis system for tuberculosis detection is applied to images with and without pathology and with and without foreign object removal. Unprocessed, the texture analysis abnormality score of normal images with foreign objects is comparable to those with pathology. After removing foreign objects, the texture score of normal images with and without foreign objects is similar, while abnormal images, whether they contain foreign objects or not, achieve on average higher scores. Conclusions: The authors conclude that removal of foreign objects from chest radiographs is feasible and beneficial for automated image analysis.
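
    A sketch of the detect-then-remove idea described above: a kNN pixel classifier (trained elsewhere on labelled pixel features) gives a per-pixel probability of "foreign object"; thresholded pixels form a mask that is filled by OpenCV's Telea inpainting. The feature extraction, the training data, the threshold, and the inpainting variant are assumptions of this sketch, not the authors' implementation.

      # Sketch: mask foreign-object pixels and inpaint them.
      import numpy as np
      import cv2
      from sklearn.neighbors import KNeighborsClassifier

      def remove_foreign_objects(gray_u8, pixel_features, knn: KNeighborsClassifier,
                                 threshold=0.5, radius=3):
          """gray_u8: (h, w) uint8 radiograph; pixel_features: (h*w, n_features)."""
          prob = knn.predict_proba(pixel_features)[:, 1].reshape(gray_u8.shape)
          mask = (prob > threshold).astype(np.uint8) * 255
          return cv2.inpaint(gray_u8, mask, radius, cv2.INPAINT_TELEA)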

  14. Detection and measurement of plant disease symptoms using visible-wavelength photography and image analysis

    USDA-ARS?s Scientific Manuscript database

    Disease assessment is required for many purposes including predicting yield loss, monitoring and forecasting epidemics, judging host resistance, and for studying fundamental biological host-pathogen processes. Inaccurate and/or imprecise assessments can result in incorrect conclusions or actions. Im...

  15. Automated image analysis of the severity of foliar citrus canker symptoms

    USDA-ARS?s Scientific Manuscript database

    Citrus canker (caused by Xanthomonas citri subsp. citri) is a destructive disease, reducing yield, and rendering fruit unfit for fresh sale. Accurate assessment of citrus canker severity and other diseases is needed for several purposes, including monitoring epidemics and evaluation of germplasm. ...

  16. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898
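
    The standard OpenCV chessboard calibration and undistortion calls of the kind this work builds on; the authors' own software may use different models and settings (e.g. additional distortion terms for the wide-angle lens), so this is only a generic sketch, and the board size and file pattern are assumptions.

      # Sketch: chessboard calibration and undistortion with OpenCV.
      import glob
      import numpy as np
      import cv2

      def calibrate(pattern="calib_*.jpg", board=(9, 6)):
          """Returns the camera matrix K and distortion coefficients."""
          objp = np.zeros((board[0] * board[1], 3), np.float32)
          objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)
          obj_points, img_points, size = [], [], None
          for path in glob.glob(pattern):
              gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
              found, corners = cv2.findChessboardCorners(gray, board)
              if found:
                  obj_points.append(objp)
                  img_points.append(corners)
                  size = gray.shape[::-1]
          rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, size, None, None)
          return K, dist

      def undistort(image, K, dist):
          return cv2.undistort(image, K, dist)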

  17. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not of sufficient resolution to perform the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  18. Brand trust and image: effects on customer satisfaction.

    PubMed

    Khodadad Hosseini, Sayed Hamid; Behboudi, Leila

    2017-08-14

    Purpose The purpose of this paper is to investigate brand trust and brand image effects on healthcare service users. Nowadays, managers and health activists are showing an increased tendency towards marketing and branding to attract and satisfy customers. Design/methodology/approach The current study's design is based on a conceptual model examining brand trust and brand image effects on customer satisfaction. Data obtained from 240 questionnaires (310 respondents) were analyzed using path analysis. Findings Results revealed that the most effective items bearing the highest influence on customer satisfaction and on benefiting from healthcare services include brand image, staff sincerity towards patients, interactions with physicians, and rapport. Research limitations/implications This study needs to be conducted in different hospitals and with different patients, which would expand the model and its influence on patient satisfaction. Originality/value Being the first study that simultaneously addresses brand trust and brand image effects on customer satisfaction, this research provides in-depth insights into healthcare marketing. Moreover, identifying significant components associated with healthcare branding helps managers and healthcare activists to create and protect their brands, consequently leading to increased profitability resulting from enhanced customer satisfaction. Additionally, it would probably facilitate purchasing processes during service selection.

  19. Improvement of automatic hemorrhage detection methods using brightness correction on fundus images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Nakagawa, Toshiaki; Hayashi, Yoshinori; Kakogawa, Masakatsu; Sawada, Akira; Kawase, Kazuhide; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    We have been developing several automated methods for detecting abnormalities in fundus images. The purpose of this study is to improve our automated hemorrhage detection method to help diagnose diabetic retinopathy. We propose a new method for preprocessing and false positive elimination in the present study. The brightness of the fundus image was changed by a nonlinear curve with brightness values of the hue saturation value (HSV) space. In order to emphasize brown regions, gamma correction was performed on each red, green, and blue-bit image. Subsequently, the histograms of each red, green, and blue-bit image were extended. After that, the hemorrhage candidates were detected. The brown regions indicated hemorrhages and blood vessels, and their candidates were detected using density analysis. We removed large candidates such as blood vessels. Finally, false positives were removed by using a 45-feature analysis. To evaluate the new method for the detection of hemorrhages, we examined 125 fundus images, including 35 images with hemorrhages and 90 normal images. The sensitivity and specificity for the detection of abnormal cases were 80% and 88%, respectively. These results indicate that the new method may effectively improve the performance of our computer-aided diagnosis system for hemorrhages.
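
    A sketch of the kind of per-channel preprocessing described (gamma correction on each of the R, G, and B channels followed by histogram stretching) might look as follows in Python; the gamma value and the simple min-max stretching rule are illustrative assumptions rather than the authors' actual parameters.

      import numpy as np

      def gamma_correct(channel, gamma=1.5):
          # gamma chosen here only for illustration; tuned so dark (brown) regions are emphasised
          norm = channel.astype(np.float32) / 255.0
          return np.clip(norm ** gamma * 255.0, 0, 255).astype(np.uint8)

      def stretch_histogram(channel):
          # linear min-max stretch to the full 8-bit range
          lo, hi = float(channel.min()), float(channel.max())
          return np.clip((channel.astype(np.float32) - lo) / max(hi - lo, 1.0) * 255.0, 0, 255).astype(np.uint8)

      def preprocess_fundus(rgb):
          # apply gamma correction and stretching independently to the R, G, and B channels
          return np.dstack([stretch_histogram(gamma_correct(rgb[..., c])) for c in range(3)])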

  20. Haemodynamic imaging of thoracic stent-grafts by computational fluid dynamics (CFD): presentation of a patient-specific method combining magnetic resonance imaging and numerical simulations.

    PubMed

    Midulla, Marco; Moreno, Ramiro; Baali, Adil; Chau, Ming; Negre-Salvayre, Anne; Nicoud, Franck; Pruvo, Jean-Pierre; Haulon, Stephan; Rousseau, Hervé

    2012-10-01

    In the last decade, there has been increasing interest in finding imaging techniques able to provide functional vascular imaging of the thoracic aorta. The purpose of this paper is to present an imaging method combining magnetic resonance imaging (MRI) and computational fluid dynamics (CFD) to obtain a patient-specific haemodynamic analysis of patients treated by thoracic endovascular aortic repair (TEVAR). MRI was used to obtain boundary conditions. MR angiography (MRA) was followed by cardiac-gated cine sequences which covered the whole thoracic aorta. Phase contrast imaging provided the inlet and outlet profiles. A CFD mesh generator was used to model the arterial morphology, and wall movements were imposed according to the cine imaging. CFD runs were processed using the finite volume (FV) method assuming blood to be a homogeneous Newtonian fluid. Twenty patients (14 men; mean age 62.2 years) with different aortic lesions were evaluated. Four-dimensional mapping of velocity and wall shear stress were obtained, depicting different patterns of flow (laminar, turbulent, stenosis-like) and local alterations of parietal stress in-stent and along the native aorta. A computational method using a combined approach with MRI appears feasible and seems promising to provide detailed functional analysis of the thoracic aorta after stent-graft implantation. • Functional vascular imaging of the thoracic aorta offers new diagnostic opportunities • CFD can model vascular haemodynamics for clinical aortic problems • Combining CFD with MRI offers a patient-specific method of aortic analysis • Haemodynamic analysis of stent-grafts could improve clinical management and follow-up.

  1. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  2. Medical image integrity control and forensics based on watermarking--approximating local modifications and identifying global image alterations.

    PubMed

    Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch

    2011-01-01

    In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.

  3. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  4. Implementation Analysis of Cutting Tool Carbide with Cast Iron Material S45 C on Universal Lathe

    NASA Astrophysics Data System (ADS)

    Junaidi; hestukoro, Soni; yanie, Ahmad; Jumadi; Eddy

    2017-12-01

    A cutting tool is the tool used on a lathe. The cutting process of a carbide tool on cast iron material on a universal lathe is commonly analysed in terms of several aspects, namely cutting force, cutting speed, cutting power, cutting indication power, zone 1 temperature, and zone 2 temperature. The purpose of this study was to determine the magnitude of the cutting speed, cutting power, electromotor power, and zone 1 and zone 2 temperatures involved in driving the carbide cutting tool in the process of turning cast iron material. The cutting force was obtained from image analysis of the relationship between the recommended cutting force component and the plane of the cut, and the cutting speed was obtained from image analysis of the relationship between the recommended cutting speed and the feed rate.

  5. Evaluation of chronic periapical lesions by digital subtraction radiography by using Adobe Photoshop CS: a technical report.

    PubMed

    Carvalho, Fabiola B; Gonçalves, Marcelo; Tanomaru-Filho, Mário

    2007-04-01

    The purpose of this study was to describe a new technique by using Adobe Photoshop CS (San Jose, CA) image-analysis software to evaluate the radiographic changes of chronic periapical lesions after root canal treatment by digital subtraction radiography. Thirteen upper anterior human teeth with pulp necrosis and radiographic image of chronic periapical lesion were endodontically treated and radiographed 0, 2, 4, and 6 months after root canal treatment by using a film holder. The radiographic films were automatically developed and digitized. The radiographic images taken 0, 2, 4, and 6 months after root canal therapy were submitted to digital subtraction in pairs (0 and 2 months, 2 and 4 months, and 4 and 6 months) choosing "image," "calculation," "subtract," and "new document" tools from Adobe Photoshop CS image-analysis software toolbar. The resulting images showed areas of periapical healing in all cases. According to this methodology, the healing or expansion of periapical lesions can be evaluated by means of digital subtraction radiography by using Adobe Photoshop CS software.
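
    The subtraction step itself reduces to a pixel-wise difference of two geometrically aligned radiographs, offset to mid-grey. A minimal NumPy equivalent of the Photoshop "calculation/subtract" operation is sketched below, assuming the two digitised films are already registered and comparably exposed; the mid-grey offset value is an illustrative convention, not a setting taken from the paper.

      import numpy as np

      def digital_subtraction(baseline, follow_up, offset=128):
          # pixel-wise difference of two aligned 8-bit radiographs; unchanged anatomy maps to
          # mid-grey, while periapical healing or expansion appears brighter or darker
          diff = follow_up.astype(np.float32) - baseline.astype(np.float32)
          return np.clip(diff + offset, 0, 255).astype(np.uint8)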

  6. An Analysis of Teacher News in Turkish Printed Media within the Context of Teachers' Image

    ERIC Educational Resources Information Center

    Polat, Hüseyin; Ünisen, Ali

    2016-01-01

    The purpose of this study was to analyze the news about teachers in daily newspapers circulated in Turkey. To this end, the newspapers of Zaman, Posta, Hurriyet, Sabah, and Cumhuriyet were selected and scanned for the teacher news between the dates of 01 January 2014 and 31 December 2014. Document analysis technique was used for the scanned news.…

  7. Multifractal geometry in analysis and processing of digital retinal photographs for early diagnosis of human diabetic macular edema.

    PubMed

    Tălu, Stefan

    2013-07-01

    The purpose of this paper is to determine a quantitative assessment of the human retinal vascular network architecture for patients with diabetic macular edema (DME). Multifractal geometry and lacunarity parameters are used in this study. A set of 10 segmented and skeletonized human retinal images, corresponding to both normal (five images) and DME states of the retina (five images), from the DRIVE database was analyzed using the Image J software. Statistical analyses were performed using Microsoft Office Excel 2003 and GraphPad InStat software. The human retinal vascular network architecture has a multifractal geometry. The average of generalized dimensions (Dq) for q = 0, 1, 2 of the normal images (segmented versions), is similar to the DME cases (segmented versions). The average of generalized dimensions (Dq) for q = 0, 1 of the normal images (skeletonized versions), is slightly greater than the DME cases (skeletonized versions). However, the average of D2 for the normal images (skeletonized versions) is similar to the DME images. The average of lacunarity parameter, Λ, for the normal images (segmented and skeletonized versions) is slightly lower than the corresponding values for DME images (segmented and skeletonized versions). The multifractal and lacunarity analysis provides a non-invasive predictive complementary tool for an early diagnosis of patients with DME.
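
    For context, the capacity dimension D0 used in such multifractal analyses can be estimated by box counting on the segmented (binary) vessel image. The sketch below is a generic implementation under that assumption, not the ImageJ-based workflow used in the study, and the box sizes are arbitrary choices.

      import numpy as np

      def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
          counts = []
          for s in box_sizes:
              # crop so the image tiles evenly into s x s boxes
              h = (binary_img.shape[0] // s) * s
              w = (binary_img.shape[1] // s) * s
              blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(blocks.any(axis=(1, 3)).sum())  # boxes containing at least one vessel pixel
          # D0 is the slope of log N(s) against log(1/s)
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
          return slope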

  8. Novel methods for parameter-based analysis of myocardial tissue in MR images

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.

    2007-03-01

    The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.

  9. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are degenerations and losses of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activities of the dopamine neurons. The activity ratio of background to corpus striatum is used for diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis for the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images through an intervening CT image taken by SPECT/CT. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken by changing the direction. As the result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
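
    The mutual information criterion used for the CT-MR registration can be written as MI(A,B) = Σ p(a,b) log[p(a,b) / (p(a) p(b))]. A small histogram-based estimator, sufficient for scoring one candidate alignment, is sketched below; the bin count is an arbitrary choice and the two images are assumed to be resampled onto the same grid.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          # joint intensity histogram of the two same-grid images
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pab = joint / joint.sum()
          pa = pab.sum(axis=1, keepdims=True)   # marginal of image A
          pb = pab.sum(axis=0, keepdims=True)   # marginal of image B
          nz = pab > 0
          return float(np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])))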

  10. An in vitro comparison of subjective image quality of panoramic views acquired via 2D or 3D imaging.

    PubMed

    Pittayapat, P; Galiti, D; Huang, Y; Dreesen, K; Schreurs, M; Souza, P Couto; Rubira-Bullen, I R F; Westphalen, F H; Pauwels, R; Kalema, G; Willems, G; Jacobs, R

    2013-01-01

    The objective of this study is to compare subjective image quality and diagnostic validity of cone-beam CT (CBCT) panoramic reformatting with digital panoramic radiographs. Four dry human skulls and two formalin-fixed human heads were scanned using nine different CBCTs, one multi-slice CT (MSCT) and one standard digital panoramic device. Panoramic views were generated from CBCTs in four slice thicknesses. Seven observers scored image quality and visibility of 14 anatomical structures. Four observers repeated the observation after 4 weeks. Digital panoramic radiographs showed significantly better visualization of anatomical structures except for the condyle. Statistical analysis of image quality showed that the 3D imaging modalities (CBCTs and MSCT) were 7.3 times more likely to receive poor scores than the 2D modality. Yet, image quality from NewTom VGi® and 3D Accuitomo 170® was almost equivalent to that of digital panoramic radiographs with respective odds ratio estimates of 1.2 and 1.6 at 95% Wald confidence limits. A substantial overall agreement amongst observers was found. Intra-observer agreement was moderate to substantial. While 2D-panoramic images are significantly better for subjective diagnosis, 2/3 of the 3D-reformatted panoramic images are moderate or good for diagnostic purposes. Panoramic reformattings from particular CBCTs are comparable to digital panoramic images concerning the overall image quality and visualization of anatomical structures. This clinically implies that a 3D-derived panoramic view can be generated for diagnosis with a recommended 20-mm slice thickness, if CBCT data is a priori available for other purposes.

  11. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    NASA Technical Reports Server (NTRS)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  12. Colony fingerprint for discrimination of microbial species based on lensless imaging of microcolonies

    PubMed Central

    Maeda, Yoshiaki; Dobashi, Hironori; Sugiyama, Yui; Saeki, Tatsuya; Lim, Tae-kyu; Harada, Manabu; Matsunaga, Tadashi; Yoshino, Tomoko

    2017-01-01

    Detection and identification of microbial species are crucial in a wide range of industries, including production of beverages, foods, cosmetics, and pharmaceuticals. Traditionally, colony formation and its morphological analysis (e.g., size, shape, and color) with the naked eye have been employed for this purpose. However, such a conventional method is time consuming, labor intensive, and not very reproducible. To overcome these problems, we propose a novel method that detects microcolonies (diameter 10–500 μm) using a lensless imaging system. When comparing colony images of five microorganisms from different genera (Escherichia coli, Salmonella enterica, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans), the images showed obviously different features. Being closely related species, St. aureus and St. epidermidis resembled each other, but the imaging analysis could extract substantial information (colony fingerprints) including morphological and physiological features, and linear discriminant analysis of the colony fingerprints distinguished these two species with 100% accuracy. Because this system may offer many advantages such as high-throughput testing, lower costs, more compact equipment, and ease of automation, it holds promise for microbial detection and identification in various academic and industrial areas. PMID:28369067

  13. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
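
    One standard random-effects technique in this family is DerSimonian-Laird pooling of per-study estimates (for example, of a repeatability coefficient). The compact sketch below is a generic illustration of that standard method only; it is not the alternative small-study approaches the authors propose.

      import numpy as np

      def dersimonian_laird(estimates, variances):
          # estimates: per-study values of the performance metric; variances: their within-study variances
          y = np.asarray(estimates, dtype=float)
          v = np.asarray(variances, dtype=float)
          w = 1.0 / v
          y_fixed = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's Q
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - (len(y) - 1)) / c)     # between-study variance estimate
          w_star = 1.0 / (v + tau2)
          pooled = np.sum(w_star * y) / np.sum(w_star)
          se = 1.0 / np.sqrt(np.sum(w_star))
          return pooled, se, tau2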

  14. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  15. Development of a Multi-Centre Clinical Trial Data Archiving and Analysis Platform for Functional Imaging

    NASA Astrophysics Data System (ADS)

    Driscoll, Brandon; Jaffray, David; Coolens, Catherine

    2014-03-01

    Purpose: To provide clinicians & researchers participating in multi-centre clinical trials with a central repository for large-volume dynamic imaging data, as well as a set of tools providing end-to-end testing and image analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system: the PACS server, the data analysis computer(s), and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS accessible only by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom-designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool) and RECIST. Results: A framework has been set up currently serving seven clinical trials spanning five hospitals, with three more trials to be added over the next six months. After initial rapid image transfer (+ 2 MB/s), all data analysis is done server side, making it robust and rapid. This has provided the ability to perform computationally expensive operations such as voxel-wise kinetic modelling on very large data archives (+20 GB/50k images/patient) remotely with minimal end-user hardware. Conclusions: This system is currently in its proof-of-concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high-powered analysis machines as well as adding real-time review capabilities.

  16. Automated analysis of time-lapse fluorescence microscopy images: from live cell images to intracellular foci.

    PubMed

    Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik

    2010-10-01

    Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
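
    A toy version of the foci-counting step (after cells have already been segmented and tracked) can be written with SciPy as below. The Gaussian smoothing, the mean-plus-k-standard-deviations threshold, and the minimum focus size are illustrative choices, not the validated system's actual detector.

      import numpy as np
      from scipy import ndimage

      def count_foci(cell_image, cell_mask, sigma=2.0, k=3.0, min_size=4):
          # smooth, then threshold at mean + k*std of the intensities inside the cell mask
          smooth = ndimage.gaussian_filter(cell_image.astype(float), sigma)
          inside = smooth[cell_mask]
          thr = inside.mean() + k * inside.std()
          foci = (smooth > thr) & cell_mask
          # label connected bright spots and keep those above the minimum size
          labels, n = ndimage.label(foci)
          sizes = ndimage.sum(foci, labels, range(1, n + 1))
          return int(np.sum(np.asarray(sizes) >= min_size))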

  17. Terahertz imaging systems: a non-invasive technique for the analysis of paintings

    NASA Astrophysics Data System (ADS)

    Fukunaga, K.; Hosako, I.; Duling, I. N., III; Picollo, M.

    2009-07-01

    Terahertz (THz) imaging is an emerging technique for non-invasive analysis. Since THz waves can penetrate opaque materials, various imaging systems that use THz waves have been developed to detect, for instance, concealed weapons, illegal drugs, and defects in polymer products. The absorption of THz waves by water is extremely strong, and hence, THz waves can be used to monitor the water content in various objects. THz imaging can be performed either by transmission or by reflection of THz waves. In particular, time-domain reflection imaging uses THz pulses that propagate in specimens, and in this technique, pulses reflected from the surface and from the internal boundaries of the specimen are detected. In general, the internal structure is observed in cross-sectional images obtained using micro-specimens taken from the work that is being analysed. On the other hand, in THz time-domain imaging, a map of the layer of interest can be easily obtained without collecting any samples. When real-time imaging is required, for example, in the investigation of the effect of a solvent or during the monitoring of water content, a THz camera can be used. The first application of THz time-domain imaging in the analysis of a historical tempera masterpiece was performed on the panel painting Polittico di Badia by Giotto, of the permanent collection of the Uffizi Gallery. The results of that analysis revealed that the work is composed of two layers of gypsum, with a canvas between these layers. In the paint layer, gold foils covered by paint were clearly observed, and the consumption or ageing of gold could be estimated by noting the amount of reflection. These results prove that THz imaging can yield useful information for conservation and restoration purposes.

  18. Voxel-Wise Time-Series Analysis of Quantitative MRI in Relapsing-Remitting MS: Dynamic Imaging Metrics of Disease Activity Including Pre-Lesional Changes

    DTIC Science & Technology

    2015-12-01

    Additional simulations evaluated the effect of gradient and RF spoiling on the accuracy of flip angle (FA) quantification, including the use of a longer spoiler gradient to improve FA quantification accuracy. Purpose: Cross-relaxation imaging (CRI) is a family of quantitative MRI methods (Mossahebi, Yarnykh, and Samsonov).

  19. Chemical Applications of a Programmable Image Acquisition System

    NASA Astrophysics Data System (ADS)

    Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian

    2003-06-01

    Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
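
    Example (1), building a standard curve from well-plate transmission data, rests on the Beer-Lambert relation A = -log10(I_sample / I_blank) followed by a linear fit of absorbance against concentration. A minimal sketch is given below; the grey-level readings and concentrations are hypothetical values used purely for illustration, not data from the paper.

      import numpy as np

      def absorbance(sample_intensity, blank_intensity):
          # Beer-Lambert: A = -log10(T), with transmittance T = I_sample / I_blank
          return -np.log10(np.asarray(sample_intensity, dtype=float) / blank_intensity)

      # hypothetical mean pixel intensities of standard wells and a blank well
      standard_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])      # concentration, arbitrary units
      standard_int = np.array([200.0, 159.0, 126.0, 80.0, 32.0])
      blank_int = 200.0

      A = absorbance(standard_int, blank_int)
      slope, intercept = np.polyfit(standard_conc, A, 1)        # standard concentration-absorbance curve

      # estimate an unknown well from its measured mean intensity
      unknown_conc = (absorbance(95.0, blank_int) - intercept) / slope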

  20. First experiences with in-vivo x-ray dark-field imaging of lung cancer in mice

    NASA Astrophysics Data System (ADS)

    Gromann, Lukas B.; Scherer, Kai; Yaroshenko, Andre; Bölükbas, Deniz A.; Hellbach, Katharina; Meinel, Felix G.; Braunagel, Margarita; Eickelberg, Oliver; Reiser, Maximilian F.; Pfeiffer, Franz; Meiners, Silke; Herzen, Julia

    2017-03-01

    Purpose: The purpose of the present study was to evaluate if x-ray dark-field imaging can help to visualize lung cancer in mice. Materials and Methods: The experiments were performed using mutant mice with high-grade adenocarcinomas. Eight animals with pulmonary carcinoma and eight control animals were imaged in radiography mode using a prototype small-animal x-ray dark-field scanner and three of the cancerous ones additionally in CT mode. After imaging, the lungs were harvested for histological analysis. To determine their diagnostic value, x-ray dark-field and conventional attenuation images were analyzed by three experienced readers in a blind assessment. Results radiographic imaging: The lung nodules were much clearer visualized on the dark-field radiographs compared to conventional radiographs. The loss of air-tissue interfaces in the tumor leads to a significant loss of x-ray scattering, reflected in a strong dark-field signal change. The difference between tumor and healthy tissue in terms of x-ray attenuation is significantly less pronounced. Furthermore, the signal from the overlaying structures on conventional radiographs complicates the detection of pulmonary carcinoma. Results CT imaging: The very first in-vivo CT-imaging results are quite promising as smaller tumors are often better visible in the dark-field images. However the imaging quality is still quite low, especially in the attenuation images due to un-optimized scanning parameters. Conclusion: We found a superior diagnostic performance of dark-field imaging compared to conventional attenuation based imaging, especially when it comes to the detection of small lung nodules. These results support the motivation to further develop this technique and translate it towards a clinical environment.

  1. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting the coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
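
    The ROC step described above (scoring each patient's heterogeneity index against the PCI outcome) could be reproduced with scikit-learn roughly as follows. The Youden-index rule for picking an operating point is an assumption, since the abstract does not state how the reported sensitivity/specificity pair was chosen.

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      def evaluate_heterogeneity_index(scores, labels):
          # scores: per-patient texture heterogeneity indices
          # labels: 1 = significant (>70%) stenosis on PCI, 0 = no significant stenosis (assumed coding)
          auc = roc_auc_score(labels, scores)
          fpr, tpr, thresholds = roc_curve(labels, scores)
          best = np.argmax(tpr - fpr)                 # Youden index J = sensitivity + specificity - 1
          return auc, thresholds[best], tpr[best], 1.0 - fpr[best]  # AUC, cutoff, sensitivity, specificity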

  2. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communications systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, thereby diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM). A high compression ratio is felt to be useful for medical imagery. Therefore, this study evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two efficacious image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system. The diagnostic accuracy is measured based on receiver operating characteristic (ROC) analysis. Namely, the ROC curves are used to compare the diagnostic performance of two or more reconstructed images. The analysis results enable a comparison of the compression ratios achieved by JPEG and JPEG2000 for 3-D US images. Results of this study provide the possible bit rates using JPEG and JPEG2000 for 3-D breast US images.
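
    A simple way to generate such rate-distortion points is to re-encode each 2-D slice of the 3-D US volume at several quality factors and record the achieved compression ratio (and, for reference, PSNR) before feeding the reconstructions to the CAD system. The Pillow-based sketch below covers baseline JPEG only, since JPEG2000 support depends on the locally installed codec; quality factors and the PSNR metric are illustrative choices.

      import io
      import numpy as np
      from PIL import Image

      def compress_and_measure(slice_2d, quality):
          # slice_2d: one 8-bit US slice; quality: JPEG quality factor (1-95)
          buf = io.BytesIO()
          Image.fromarray(slice_2d).save(buf, format="JPEG", quality=quality)
          ratio = slice_2d.nbytes / buf.getbuffer().nbytes     # achieved compression ratio
          buf.seek(0)
          recon = np.asarray(Image.open(buf))
          mse = np.mean((slice_2d.astype(float) - recon.astype(float)) ** 2)
          psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
          return ratio, psnr, recon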

  3. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. A new near-infrared fluorescence-based surgical navigation system (SNS) imaging software, which has been developed by our research group, is presented for SLN detection surgery in this paper. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) which has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following software modules, which mainly include the control module, the image grabbing module, the real-time display module, the data saving module and the image processing module. Some algorithms have been designed to achieve the performance of the software, for example an image registration algorithm based on correlation matching. Some of the key features of the software include: setting the control parameters of the SNS; acquiring, displaying and storing the intraoperative imaging data in real time automatically; and analysis and processing of the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance and it will be extensively used for clinical purposes.

  4. Monte Carlo simulation of PET/MR scanner and assessment of motion correction strategies

    NASA Astrophysics Data System (ADS)

    Işın, A.; Uzun Ozsahin, D.; Dutta, J.; Haddani, S.; El-Fakhri, G.

    2017-03-01

    Positron Emission Tomography is widely used in three-dimensional imaging of metabolic body function and in tumor detection. Important research efforts are made to improve this imaging modality, and powerful simulators such as GATE are used to test and develop methods for this purpose. PET requires acquisition times on the order of a few minutes. Therefore, because of natural patient movements such as respiration, the image quality can be adversely affected, which drives scientists to develop motion compensation methods to improve the image quality. The goal of this study is to evaluate various image reconstruction methods with a GATE simulation of a PET acquisition of the torso area. The obtained results show the need to compensate natural respiratory movements in order to obtain an image of quality similar to that of the reference image. Improvements are still possible in the applied motion-field extraction algorithms. Finally, a statistical analysis should confirm the obtained results.

  5. Deformation Invariant Attribute Vector for Deformable Registration of Longitudinal Brain MR Images

    PubMed Central

    Li, Gang; Guo, Lei; Liu, Tianming

    2009-01-01

    This paper presents a novel approach to define deformation invariant attribute vector (DIAV) for each voxel in 3D brain image for the purpose of anatomic correspondence detection. The DIAV method is validated by using synthesized deformation in 3D brain MRI images. Both theoretic analysis and experimental studies demonstrate that the proposed DIAV is invariant to general nonlinear deformation. Moreover, our experimental results show that the DIAV is able to capture rich anatomic information around the voxels and exhibit strong discriminative ability. The DIAV has been integrated into a deformable registration algorithm for longitudinal brain MR images, and the results on both simulated and real brain images are provided to demonstrate the good performance of the proposed registration algorithm based on matching of DIAVs. PMID:19369031

  6. A Multimode Optical Imaging System for Preclinical Applications In Vivo: Technology Development, Multiscale Imaging, and Chemotherapy Assessment

    PubMed Central

    Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V. Krishnan; Ljubimova, Julia; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.

    2012-01-01

    Purpose Several established optical imaging approaches have been applied, usually in isolation, to preclinical studies; however, truly useful in vivo imaging may require a simultaneous combination of imaging modalities to examine dynamic characteristics of cells and tissues. We developed a new multimode optical imaging system designed to be application-versatile, yielding high-sensitivity and high-specificity molecular imaging. Procedures We integrated several optical imaging technologies, including fluorescence intensity, spectral, lifetime, intravital confocal, two-photon excitation, and bioluminescence, into a single system that enables functional multiscale imaging in animal models. Results The approach offers a comprehensive imaging platform for kinetic, quantitative, and environmental analysis of highly relevant information, with micro-to-macroscopic resolution. Applied to small animals in vivo, this provides superior monitoring of processes of interest, represented here by chemo-/nanoconstruct therapy assessment. Conclusions This new system is versatile and can be optimized for various applications, of which cancer detection and targeted treatment are emphasized here. PMID:21874388

  7. GPU accelerated optical coherence tomography angiography using strip-based registration (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.

    2017-02-01

    High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. The acquisition of multiple OCT-A images sequentially can be performed for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature preserving image averaging can be performed. In this report, we present a novel method for a GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
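
    The strip-wise affine step can be sketched with OpenCV as below: SIFT keypoints are matched between one strip of a motion-corrupted OCT-A image and a reference en face image, a partial affine transform is estimated with RANSAC, and the strip is resampled. The ratio-test threshold and the use of estimateAffinePartial2D are illustrative choices, not the authors' exact GPU pipeline.

      import cv2
      import numpy as np

      def register_strip(moving_strip, reference):
          # moving_strip, reference: 8-bit grayscale en face images (assumed inputs)
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(moving_strip, None)
          kp2, des2 = sift.detectAndCompute(reference, None)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
          # Lowe's ratio test to keep distinctive matches
          good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
          src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          A, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
          # resample the strip into the reference frame (A may be None if too few matches survive)
          return cv2.warpAffine(moving_strip, A, (reference.shape[1], reference.shape[0]))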

  8. General-purpose interface bus for multiuser, multitasking computer system

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1990-01-01

    The architecture of a multiuser, multitasking, virtual-memory computer system intended for use by a medium-size research group is described. There are three central processing units (CPUs) in the configuration, each with 16 MB of memory and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.

  9. Logical Meanings in Multimedia Learning Materials: A Multimodal Discourse Analysis

    ERIC Educational Resources Information Center

    Vorvilas, George

    2014-01-01

    Multimedia educational applications convey meanings through several semiotic modes (e.g. text, image, sound, etc.). There is an urgent need for multimedia designers as well as for teachers to understand the meaning potential of these artifacts and discern the communicative purposes they serve. Towards this direction, a hermeneutic semiotic…

  10. Comparison of Arterial Spin-labeling Perfusion Images at Different Spatial Normalization Methods Based on Voxel-based Statistical Analysis.

    PubMed

    Tani, Kazuki; Mio, Motohira; Toyofuku, Tatsuo; Kato, Shinichi; Masumoto, Tomoya; Ijichi, Tetsuya; Matsushima, Masatoshi; Morimoto, Shoichi; Hirata, Takumi

    2017-01-01

    Spatial normalization is a significant image pre-processing operation in statistical parametric mapping (SPM) analysis. The purpose of this study was to clarify the optimal method of spatial normalization for improving diagnostic accuracy in SPM analysis of arterial spin-labeling (ASL) perfusion images. We evaluated the SPM results of five spatial normalization methods obtained by comparing patients with Alzheimer's disease or normal pressure hydrocephalus complicated with dementia and cognitively healthy subjects. We used the following methods: 3DT1-conventional, based on spatial normalization using anatomical images; 3DT1-DARTEL, based on spatial normalization with DARTEL using anatomical images; 3DT1-conventional template and 3DT1-DARTEL template, created by averaging cognitively healthy subjects spatially normalized using the above methods; and ASL-DARTEL template, created by averaging cognitively healthy subjects spatially normalized with DARTEL using ASL images only. Our results showed that the ASL-DARTEL template was smaller than the other two templates. Our SPM results obtained with the ASL-DARTEL template method were inaccurate. Also, there were no significant differences between the 3DT1-conventional and 3DT1-DARTEL template methods. In contrast, the 3DT1-DARTEL method showed higher detection sensitivity and more precise anatomical localization. Our SPM results suggest that we should perform spatial normalization with DARTEL using anatomical images.

  11. Mesoscale and severe storms (Mass) data management and analysis system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.; Dickerson, M.

    1984-01-01

    Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) to analyze large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.

  12. Volumetric calculation using low cost unmanned aerial vehicle (UAV) approach

    NASA Astrophysics Data System (ADS)

    Rahman, A. A. Ab; Maulud, K. N. Abdul; Mohd, F. A.; Jaafar, O.; Tahar, K. N.

    2017-12-01

    Unmanned Aerial Vehicle (UAV) technology has evolved dramatically in the 21st century. It is used by both the military and the general public for recreational purposes and mapping work. The operating cost of a UAV is much lower than that of a normal aircraft, and it does not require a large work space. UAV systems offer functions similar to those of LIDAR and satellite imaging technologies, which require huge cost, labour, and time consumption to produce elevation and dimension data. Measurement of difficult objects such as a water tank can also be done using a UAV. The purpose of this paper is to show the capability of a UAV to compute the volume of a water tank based on different numbers of images and control points. The results were compared with the actual volume of the tank to validate the measurement. In this study, the image acquisition was done using a Phantom 3 Professional, which is a low-cost UAV. The analysis in this study is based on different volume computations using two and four control points with various sets of UAV images. The results show that more images provide a better-quality measurement. With 95 images and four GCPs, the error percentage relative to the actual volume is about 5%. Four control points are enough to get good results, but more images are needed, estimated at about 115 to 220 images. All in all, it can be concluded that the low-cost UAV has potential to be used for water volume and dimension measurement.
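
    Once the UAV images have been processed into a digital surface model (DSM), the volume computation itself is a discrete sum of cell heights above a reference base. A minimal sketch, assuming a regular DSM grid in metres rather than the specific photogrammetric software used in the study, is:

      import numpy as np

      def volume_above_base(dsm, base_elevation, cell_size):
          # dsm: 2D array of surface elevations from UAV photogrammetry, in metres
          # base_elevation: reference plane elevation, in metres
          # cell_size: ground sampling distance of one DSM cell, in metres
          heights = np.clip(dsm - base_elevation, 0, None)
          return float(heights.sum() * cell_size * cell_size)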

  13. Quantitative analysis for peripheral vascularity assessment based on clinical photoacoustic and ultrasound images

    NASA Astrophysics Data System (ADS)

    Murakoshi, Dai; Hirota, Kazuhiro; Ishii, Hiroyasu; Hashimoto, Atsushi; Ebata, Tetsurou; Irisawa, Kaku; Wada, Takatsugu; Hayakawa, Toshiro; Itoh, Kenji; Ishihara, Miya

    2018-02-01

    Photoacoustic (PA) imaging technology is expected to be applied to clinical assessment of peripheral vascularity. We started a clinical evaluation with the prototype PA imaging system we recently developed. The prototype PA imaging system was composed of an in-house Q-switched Alexandrite laser system emitting short laser pulses at a 750 nm wavelength, a handheld ultrasound transducer into which the illumination optics were integrated, and signal processing for PA image reconstruction implemented in the clinical ultrasound (US) system. For the purpose of quantitative assessment of PA images, an image analyzing function has been developed and applied to clinical PA images. In this analyzing function, vascularity, derived from the PA signal intensity within a prescribed threshold range, was defined as a numerical index of vessel fulfillment and calculated for the prescribed region of interest (ROI). The skin surface was automatically detected by utilizing the B-mode image acquired simultaneously with the PA image. The skin-surface position is utilized to place the ROI objectively while avoiding unwanted signals, such as artifacts imposed by melanin pigment in the epidermal layer, which absorbs the laser emission and generates strong PA signals. Multiple images were available to support the scanned image set for 3D viewing. PA images of several fingers of patients with systemic sclerosis (SSc) were quantitatively assessed. Since the artifact region is trimmed off in the PA images, the visibility of vessels with rather low PA signal intensity on the 3D projection image was enhanced and the reliability of the quantitative analysis was improved.
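
    The analyzing function described above reduces, in essence, to counting PA pixels whose intensity falls inside a prescribed range within an ROI placed relative to the detected skin surface. A simplified sketch is given below; the ROI depth window and the intensity thresholds are assumed parameters, not the prototype system's settings.

      import numpy as np

      def vascularity_index(pa_image, skin_row_per_column, depth_range=(10, 200), signal_range=(0.3, 1.0)):
          # pa_image: 2D PA intensity image normalised to [0, 1]
          # skin_row_per_column: detected skin-surface row index for each image column (A-line)
          rows, cols = pa_image.shape
          in_roi = np.zeros_like(pa_image, dtype=bool)
          for c in range(cols):
              top = int(skin_row_per_column[c]) + depth_range[0]
              bottom = min(rows, int(skin_row_per_column[c]) + depth_range[1])
              in_roi[top:bottom, c] = True            # ROI placed below the skin surface
          lo, hi = signal_range
          vessel = in_roi & (pa_image >= lo) & (pa_image <= hi)
          # fraction of ROI pixels whose PA intensity lies in the prescribed range
          return vessel.sum() / max(in_roi.sum(), 1)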

  14. Characterization of Propylene Glycol-Mitigated Freeze/Thaw Agglomeration of a Frozen Liquid nOMV Vaccine Formulation by Static Light Scattering and Micro-Flow Imaging.

    PubMed

    Mensch, Christopher D; Davis, Harrison B; Blue, Jeffrey T

    2015-01-01

    The purpose of this work was to investigate the susceptibility of an aluminum adjuvant and an aluminum-adjuvanted native outer membrane vesicle (nOMV) vaccine formulation to freeze/thaw-induced agglomeration using static light scattering and micro-flow imaging analysis, and to evaluate the use of propylene glycol as a vaccine formulation excipient by which freeze/thaw-induced agglomeration of an nOMV vaccine formulation could be mitigated. Our results indicate that including 7% v/v propylene glycol in an nOMV-containing, aluminum-adjuvanted vaccine formulation mitigates freeze/thaw-induced agglomeration. We evaluated the effect of freeze-thawing on an aluminum adjuvant and an aluminum-adjuvanted native outer membrane vesicle (nOMV) vaccine formulation. Specifically, we characterized the freeze/thaw-induced agglomeration through the use of static light scattering, micro-flow imaging, and cryo-electron microscopy analysis. Further, we evaluated the use of 0-9% v/v propylene glycol as an excipient that could be included in the formulation for the purpose of mitigating the agglomeration induced by freeze/thaw. The results indicate that using 7% v/v propylene glycol as a formulation excipient is effective at mitigating agglomeration of the nOMV vaccine formulation otherwise induced by freeze-thawing. © PDA, Inc. 2015.

  15. KENNEDY SPACE CENTER, FLA. - One of the world’s highest performing visual film analysis systems, developed to review and analyze previous shuttle flight data (shown here) in preparation for the shuttle fleet’s return to flight, is being used today for another purpose. NASA has permitted its use in helping to analyze a film that shows a recent kidnapping in progress in Florida. The system, developed by NASA, United Space Alliance (USA) and Silicon Graphics Inc., allows multiple-person collaboration, highly detailed manipulation and evaluation of specific imagery. The system is housed in the Image Analysis Facility inside the Vehicle Assembly Building. [Photo taken Aug. 15, 2003, courtesy of Terry Wallace, SGI

    NASA Image and Video Library

    2004-02-04

    KENNEDY SPACE CENTER, FLA. - One of the world’s highest performing visual film analysis systems, developed to review and analyze previous shuttle flight data (shown here) in preparation for the shuttle fleet’s return to flight, is being used today for another purpose. NASA has permitted its use in helping to analyze a film that shows a recent kidnapping in progress in Florida. The system, developed by NASA, United Space Alliance (USA) and Silicon Graphics Inc., allows multiple-person collaboration, highly detailed manipulation and evaluation of specific imagery. The system is housed in the Image Analysis Facility inside the Vehicle Assembly Building. [Photo taken Aug. 15, 2003, courtesy of Terry Wallace, SGI

  16. Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI

    PubMed Central

    Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.

    2017-01-01

    Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
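
    The golden-angle ordering that enables such retrospective reconstruction rotates each successive radial spoke by roughly 111.246 degrees, so any contiguous block of spokes covers k-space nearly uniformly and can be binned at an arbitrary frame rate after the scan. A small illustration of the view ordering and frame binning; the TR of 4.75 ms is an assumed value chosen only so that 12 spokes correspond to the 57 ms frames quoted above.

    ```python
    import numpy as np

    GOLDEN_ANGLE = np.pi * (np.sqrt(5) - 1) / 2   # ~111.246 degrees, in radians

    def spoke_angles(n_spokes):
        """Golden-angle radial view ordering: spoke i is rotated by i * 111.246 deg."""
        return (np.arange(n_spokes) * GOLDEN_ANGLE) % np.pi

    def bin_spokes(n_spokes, tr_ms, frame_ms):
        """Group consecutively acquired spokes into frames of a chosen temporal width."""
        spokes_per_frame = int(round(frame_ms / tr_ms))
        n_frames = n_spokes // spokes_per_frame
        return [np.arange(f * spokes_per_frame, (f + 1) * spokes_per_frame)
                for f in range(n_frames)]

    # A ~15 s acquisition reconstructed retrospectively at 57 ms per frame (~18 frames/s)
    angles = spoke_angles(3200)
    frames = bin_spokes(3200, tr_ms=4.75, frame_ms=57.0)
    print(len(frames), "frames,", len(frames[0]), "spokes per frame")
    ```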

  17. Driver drowsiness detection using ANN image processing

    NASA Astrophysics Data System (ADS)

    Vesselenyi, T.; Moca, S.; Rus, A.; Mitran, T.; Tătaru, B.

    2017-10-01

    The paper presents a study regarding the possibility of developing a drowsiness detection system for car drivers based on three types of methods: EEG and EOG signal processing and driver image analysis. In previous works the authors described the research on the first two methods. In this paper the authors studied the possibility of detecting the drowsy or alert state of the driver based on images taken during driving and on analysis of the state of the driver’s eyes: opened, half-opened and closed. For this purpose two kinds of artificial neural networks were employed: a one-hidden-layer network and an autoencoder network.

  18. A new quantitative evaluation method for age-related changes of individual pigmented spots in facial skin.

    PubMed

    Kikuchi, K; Masuda, Y; Yamashita, T; Sato, K; Katagiri, C; Hirao, T; Mizokami, Y; Yaguchi, H

    2016-08-01

    Facial skin pigmentation is one of the most prominent visible features of skin aging and often affects perception of health and beauty. To date, facial pigmentation has been evaluated using various image analysis methods developed for the cosmetic and esthetic fields. However, existing methods cannot provide precise information on pigmented spots, such as variations in size, color shade, and distribution pattern. The purpose of this study is the development of image evaluation methods to analyze individual pigmented spots and acquire detailed information on their age-related changes. To characterize the individual pigmented spots within a cheek image, we established a simple object-counting algorithm. First, we captured cheek images using an original imaging system equipped with an illumination unit and a high-resolution digital camera. The acquired images were converted into melanin concentration images using compensation formulae. Next, the melanin images were converted into binary images. The binary images were then subjected to noise reduction. Finally, we calculated parameters such as the melanin concentration, quantity, and size of individual pigmented spots using a connected-components labeling algorithm, which assigns a unique label to each separate group of connected pixels. The cheek image analysis was evaluated on 643 female Japanese subjects. We confirmed that the proposed method was sufficiently sensitive to measure the melanin concentration, and the numbers and sizes of individual pigmented spots through manual evaluation of the cheek images. The image analysis results for the 643 Japanese women indicated clear relationships between age and the changes in the pigmented spots. We developed a new quantitative evaluation method for individual pigmented spots in facial skin. This method facilitates the analysis of the characteristics of various pigmented facial spots and is directly applicable to the fields of dermatology, pharmacology, and esthetic cosmetology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
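
    A minimal sketch of the labeling step described above, using scikit-image's connected-components tools; the threshold, minimum spot size and synthetic melanin image are placeholders rather than the calibrated values used in the study.

    ```python
    import numpy as np
    from skimage.measure import label, regionprops

    def analyze_spots(melanin_img, threshold, min_area_px=20):
        """Label individual pigmented spots in a melanin-concentration image.

        Returns a list of (area_px, mean_melanin) tuples, one per connected spot.
        """
        binary = melanin_img > threshold                  # melanin map -> binary image
        labels = label(binary, connectivity=2)            # connected-components labeling
        spots = []
        for region in regionprops(labels, intensity_image=melanin_img):
            if region.area >= min_area_px:                # simple noise reduction
                spots.append((region.area, region.mean_intensity))
        return spots

    # Synthetic melanin-concentration image with two "spots" of different sizes
    melanin = np.zeros((512, 512))
    melanin[100:140, 200:260] = 0.8
    melanin[300:310, 50:65] = 0.6
    for area, mean_melanin in analyze_spots(melanin, threshold=0.5):
        print(f"spot: {area} px, mean melanin {mean_melanin:.2f}")
    ```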

  19. Empirical orthogonal function analysis of cloud-containing coastal zone color scanner images of northeastern North American coastal waters

    NASA Technical Reports Server (NTRS)

    Eslinger, David L.; O'Brien, James J.; Iverson, Richard L.

    1989-01-01

    Empirical-orthogonal-function (EOF) analyses were carried out on 36 images of the Mid-Atlantic Bight and the Gulf of Maine, obtained by the CZCS aboard Nimbus 7 for the time period from February 28 through July 9, 1979, with the purpose of determining pigment concentrations in coastal waters. The EOF procedure was modified so as to include images with significant portions of data missing due to cloud obstruction, making it possible to estimate pigment values in areas beneath clouds. The results of image analyses explained observed variances in pigment concentrations and showed a south-to-north pattern corresponding to an April Mid-Atlantic Bight bloom and a June bloom over Nantucket Shoals and Platts Bank.
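
    EOF analysis amounts to a singular value decomposition of the space-time anomaly matrix. Below is a simplified sketch of one common way to accommodate cloud-masked pixels, filling them iteratively from a truncated-EOF reconstruction; it illustrates the general idea only and is not the authors' specific modification.

    ```python
    import numpy as np

    def eof_with_gaps(data, n_modes=3, n_iter=50):
        """EOF analysis of a (time x space) matrix containing NaNs (e.g., cloud cover).

        Missing values are filled iteratively with a truncated-EOF reconstruction;
        this is a simplified stand-in for the modified EOF procedure in the paper.
        """
        mask = np.isnan(data)
        filled = np.where(mask, np.nanmean(data, axis=0), data)
        for _ in range(n_iter):
            anomalies = filled - filled.mean(axis=0)
            u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
            recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes] + filled.mean(axis=0)
            filled[mask] = recon[mask]                    # update only the cloudy pixels
        variance_explained = s[:n_modes] ** 2 / (s ** 2).sum()
        return u[:, :n_modes], s[:n_modes], vt[:n_modes], variance_explained

    # 36 images (time) x flattened pixels (space), with ~20% simulated cloud gaps
    data = np.random.rand(36, 500)
    data[np.random.rand(36, 500) < 0.2] = np.nan
    _, _, modes, var = eof_with_gaps(data)
    print("variance explained by first modes:", np.round(var, 3))
    ```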

  20. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. MO-FG-202-06: Improving the Performance of Gamma Analysis QA with Radiomics- Based Image Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootton, L; Nyflot, M; Ford, E

    2016-06-15

    Purpose: The use of gamma analysis for IMRT quality assurance has well-known limitations. Traditionally, a simple thresholding technique is used to evaluate passing criteria. However, like any image, the gamma distribution is rich in information that thresholding mostly discards. We therefore propose a novel method of analyzing gamma images that uses quantitative image features borrowed from radiomics, with the goal of improving error detection. Methods: 368 gamma images were generated from 184 clinical IMRT beams. For each beam the dose to a phantom was measured with EPID dosimetry and compared to the TPS dose calculated with and without normally distributed (2 mm sigma) errors in MLC positions. The magnitudes of 17 intensity-histogram and size-zone radiomic features were derived from each image. The features that differed most significantly between image sets were determined with ROC analysis. A linear machine-learning model was trained on these features, using 180 gamma images, to classify images as containing or not containing errors. The model was then applied to an independent validation set of 188 additional gamma distributions, half with and half without errors. Results: The most significant features for detecting errors were histogram kurtosis (p=0.007) and three size-zone metrics (p<1e-6 for each). The size-zone metrics detected clusters of high gamma-value pixels under mispositioned MLCs. The model applied to the validation set had an AUC of 0.8, compared to 0.56 for traditional gamma analysis with the decision threshold restricted to 98% or less. Conclusion: A radiomics-based image analysis method was developed that is more effective in detecting errors than traditional gamma analysis. Though the pilot study here considers only MLC position errors, radiomics-based methods for other error types are being developed, which may provide better error detection and useful information on the source of detected errors. This work was partially supported by a grant from the Agency for Healthcare Research and Quality, grant number R18 HS022244-01.
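
    A toy sketch of the general approach, computing two illustrative features of a gamma image (histogram kurtosis and the size of the largest high-gamma zone, a crude stand-in for size-zone metrics) and fitting a linear classifier; the feature set, thresholds and simulated data below are not those of the study.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from skimage.measure import label
    from sklearn.linear_model import LogisticRegression

    def gamma_features(gamma_img, high=0.8):
        """Two illustrative radiomic-style features of a gamma distribution image."""
        zones = label(gamma_img > high, connectivity=2)   # clusters of high-gamma pixels
        largest = max((np.sum(zones == i) for i in range(1, zones.max() + 1)), default=0)
        return [kurtosis(gamma_img.ravel()), largest]

    # Hypothetical training set: images with and without simulated errors
    rng = np.random.default_rng(0)
    no_err = [rng.random((64, 64)) * 0.6 for _ in range(20)]
    with_err = [np.clip(rng.random((64, 64)) * 0.6 + (rng.random((64, 64)) > 0.98), 0, 1.5)
                for _ in range(20)]
    X = np.array([gamma_features(g) for g in no_err + with_err])
    y = np.array([0] * 20 + [1] * 20)
    clf = LogisticRegression().fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```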

  2. A portable low-cost long-term live-cell imaging platform for biomedical research and education.

    PubMed

    Walzik, Maria P; Vollmar, Verena; Lachnit, Theresa; Dietz, Helmut; Haug, Susanne; Bachmann, Holger; Fath, Moritz; Aschenbrenner, Daniel; Abolpour Mofrad, Sepideh; Friedrich, Oliver; Gilbert, Daniel F

    2015-02-15

    Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile, costly and requires a high level of skill to use and maintain. These factors limit its utility to field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22 × 22 × 22 cm) and extremely low-cost (<€1250). We demonstrate its potential for biomedical use by long-term imaging of recombinant HEK293 cells at varying culture conditions and validate its ability to generate time-resolved data of high quality allowing for analysis of time-dependent processes in living cells. While this work focuses on long-term imaging of mammalian cells, the presented technology could also be adapted for use with other biological specimen and provides a general example of rapidly prototyped low-cost biosensor technology for application in life sciences and education. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Changes of the water-holding capacity and microstructure of panga and tilapia surimi gels using different stabilizers and processing methods.

    PubMed

    Filomena-Ambrosio, Annamaria; Quintanilla-Carvajal, María Ximena; Ana-Puig; Hernando, Isabel; Hernández-Carrión, María; Sotelo-Díaz, Indira

    2016-01-01

    Surimi gel is a food product traditionally manufactured from marine species; it has functional features including a specific texture and a high protein concentration. The objective of this study was to evaluate and compare the effect of the ultrasound extraction protein method and different stabilizers on the water-holding capacity (WHC), texture, and microstructure of surimi from panga and tilapia to potentially increase the value of these species. For this purpose, WHC was determined and texture profile analysis, scanning electron microscopy, and texture image analysis were carried out. The results showed that the ultrasound method and the sodium citrate can be used to obtain surimi gels from panga and tilapia with optimal textural properties such as the hardness and chewiness. Moreover, image analysis is recommended as a quantitative and non-invasive technique to evaluate the microstructure and texture image properties of surimis prepared using different processing methods and stabilizers. © The Author(s) 2015.

  4. Image Analysis Technique for Material Behavior Evaluation in Civil Structures

    PubMed Central

    Moretti, Michele; Rossi, Gianluca

    2017-01-01

    The article presents a hybrid monitoring technique for the measurement of the deformation field. The goal is to obtain information about crack propagation in existing structures, for the purpose of monitoring their state of health. The measurement technique is based on the capture and analysis of a digital image set. Special markers that can be removed without damaging existing structures, such as historical masonry, were applied to the surface of the structures. The digital image analysis was done using software specifically designed in Matlab to track the markers and determine the evolution of the deformation state. The method can be used on any type of structure but is particularly suitable when it is necessary not to damage the surface of the structure. A series of experiments carried out on masonry walls of the Oliverian Museum (Pesaro, Italy) and Palazzo Silvi (Perugia, Italy) allowed validation of the elaborated procedure by comparing the results with those derived from traditional measuring techniques. PMID:28773129

  5. Image Analysis Technique for Material Behavior Evaluation in Civil Structures.

    PubMed

    Speranzini, Emanuela; Marsili, Roberto; Moretti, Michele; Rossi, Gianluca

    2017-07-08

    The article presents a hybrid monitoring technique for the measurement of the deformation field. The goal is to obtain information about crack propagation in existing structures, for the purpose of monitoring their state of health. The measurement technique is based on the capture and analysis of a digital image set. Special markers that can be removed without damaging existing structures, such as historical masonry, were applied to the surface of the structures. The digital image analysis was done using software specifically designed in Matlab to track the markers and determine the evolution of the deformation state. The method can be used on any type of structure but is particularly suitable when it is necessary not to damage the surface of the structure. A series of experiments carried out on masonry walls of the Oliverian Museum (Pesaro, Italy) and Palazzo Silvi (Perugia, Italy) allowed validation of the elaborated procedure by comparing the results with those derived from traditional measuring techniques.

  6. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve the reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
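
    A minimal sketch of compressive-sensing recovery in a weighted principal component space via iterative soft thresholding; the measurement matrix, training reflectances and component weights are synthetic placeholders, and the weighting here is a simplified stand-in for the visual-feature weighting described above.

    ```python
    import numpy as np

    def ista(A, y, step, lam, n_iter=500):
        """Iterative soft-thresholding: approximately solve min ||A x - y||^2 + lam ||x||_1."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - step * A.T @ (A @ x - y)                          # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
        return x

    # Hypothetical setup: 8 camera channels, 31-band reflectance, PCA basis from training samples
    rng = np.random.default_rng(1)
    train = rng.random((200, 31))                     # training reflectances (e.g., color cards)
    mean_r = train.mean(axis=0)
    basis = np.linalg.svd(train - mean_r, full_matrices=False)[2][:10].T   # 31 x 10 components
    weights = np.linspace(1.0, 0.5, 10)               # simplified stand-in for visual weighting
    M = rng.random((8, 31))                           # channel spectral responses
    true_r = train[0]                                 # "unknown" reflectance to recover
    y = M @ true_r                                    # measured channel response values
    A = M @ (basis * weights)                         # sensing matrix in weighted PCA space
    coef = ista(A, y - M @ mean_r, step=0.01, lam=1e-3)
    recon = mean_r + (basis * weights) @ coef
    print("rms reflectance error:", np.sqrt(np.mean((recon - true_r) ** 2)))
    ```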

  7. Simple and inexpensive hardware and software method to measure volume changes in Xenopus oocytes expressing aquaporins.

    PubMed

    Dorr, Ricardo; Ozu, Marcelo; Parisi, Mario

    2007-04-15

    Members of the water channel (aquaporin) family have been identified in central nervous system cells. A classic method to measure membrane water permeability and its regulation is to capture and analyse images of Xenopus laevis oocytes expressing them. Laboratories dedicated to the analysis of motion images usually have powerful equipment valued at thousands of dollars. However, some scientists consider that new approaches are needed to reduce costs in scientific labs, especially in developing countries. The objective of this work is to share a very low-cost hardware and software setup based on a well-selected webcam, a hand-made adapter to a microscope and the use of free software to measure membrane water permeability in Xenopus oocytes. One of the main purposes of this setup is to maintain a high level of quality in images obtained at brief intervals (shorter than 70 ms). The presented setup helps to economize without sacrificing image analysis requirements.

  8. Computer-based route-definition system for peripheral bronchoscopy.

    PubMed

    Graham, Michael W; Gibbs, Jason D; Higgins, William E

    2012-04-01

    Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.

  9. Earth mapping - aerial or satellite imagery comparative analysis

    NASA Astrophysics Data System (ADS)

    Fotev, Svetlin; Jordanov, Dimitar; Lukarski, Hristo

    Nowadays, solving the tasks of revising existing map products and creating new maps requires a choice of the land cover image source. The issue of the effectiveness and cost of using aerial mapping systems versus the efficiency and cost of very-high-resolution satellite imagery is topical [1, 2, 3, 4]. The price of any remotely sensed image depends on the product (panchromatic or multispectral), resolution, processing level, scale, urgency of the task and on whether the needed image is available in the archive or has to be requested. The purpose of the present work is to make a comparative analysis between the two approaches for mapping the Earth with respect to two parameters, quality and cost, and to suggest an approach for selecting the map information source: airplane-based or spacecraft-based imaging systems with very high spatial resolution. Two cases are considered: an area approximately equal to one satellite scene and an area approximately equal to the territory of Bulgaria.

  10. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787

  11. Automatic Segmenting Structures in MRI's Based on Texture Analysis and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Kaur, Mandeep; Rattan, Munish; Singh, Pushpinder

    2017-12-01

    The purpose of this paper is to present a variational method for geometric contours that helps the level set function remain close to a signed distance function, thereby removing the need for an expensive re-initialization procedure. The level set method is then applied to magnetic resonance images (MRI) to track irregularities, as medical imaging plays a substantial part in the treatment, therapy and diagnosis of various organs, tumors and abnormalities. It offers the patient speedier and more decisive disease control with fewer side effects. The geometric shape, the tumor's size and abnormal tissue growth can be calculated by segmenting the particular image. Automatic segmentation in medical imaging is still a great challenge for researchers. Based on texture analysis, different images are processed by optimizing the level set segmentation. Traditionally, optimization was manual for every image, with each parameter selected one after another. By applying fuzzy logic, image segmentation is correlated with texture features, making it automatic and more effective. There is no initialization of parameters and the approach works like an intelligent system. It segments different MRI images without tuning the level set parameters and gives optimized results for all MRIs.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shimizu, Y; Yoon, Y; Iwase, K

    Purpose: We are trying to develop an image-searching technique to identify misfiled images in a picture archiving and communication system (PACS) server by using five biological fingerprints: the whole lung field, cardiac shadow, superior mediastinum, lung apex, and right lower lung. Each biological fingerprint in a chest radiograph includes distinctive anatomical structures for identifying misfiled images. The whole lung field was less effective for evaluating the similarity between two images than the other biological fingerprints, mainly because of variation in patient positioning for chest radiographs. The purpose of this study is to develop new biological fingerprints that could reduce the influence of positioning differences in chest radiography. Methods: Two hundred patients were selected randomly from our database (36,212 patients). These patients had two images each (current and previous images). Current images were used as the misfiled images in this study. A circumscribed rectangular area of the lung and the upper half of that rectangle were selected automatically as new biological fingerprints. These biological fingerprints were matched against all previous images in the database. The degrees of similarity between the two images were calculated for the same and different patients. The usefulness of the new biological fingerprints for automated patient recognition was examined in terms of receiver operating characteristic (ROC) analysis. Results: Areas under the ROC curves (AUCs) for the circumscribed rectangle of the lung, the upper half of the rectangle, and the whole lung field were 0.980, 0.994, and 0.950, respectively. The new biological fingerprints showed better performance in identifying the patients correctly than the whole lung field. Conclusion: We have developed new biological fingerprints: a circumscribed rectangle of the lung and the upper half of that rectangle. These new biological fingerprints would be useful for an automated patient identification system because they are less affected by positioning differences during imaging.
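
    A toy sketch of the evaluation idea, assuming normalized cross-correlation as the similarity measure between fingerprint regions (the abstract does not specify the measure) and using ROC analysis over same-patient and different-patient pairs; the image data are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def ncc(a, b):
        """Normalized cross-correlation between two equally sized image regions."""
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    # Hypothetical: similarity of each current image's fingerprint region to every
    # previous image; label 1 when the pair belongs to the same patient.
    rng = np.random.default_rng(2)
    current = [rng.random((64, 64)) for _ in range(50)]
    previous = [img + 0.1 * rng.standard_normal((64, 64)) for img in current]  # same patients
    scores, labels = [], []
    for i, cur in enumerate(current):
        for j, prev in enumerate(previous):
            scores.append(ncc(cur, prev))
            labels.append(int(i == j))
    print("AUC:", round(roc_auc_score(labels, scores), 3))
    ```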

  13. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X; Chang, J

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrates the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale-invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced versus detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 values of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding results were 1.2 and 1.3 with R2 of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV-to-DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
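
    A minimal sketch of feature-based kV-to-DRR registration using OpenCV's SIFT implementation and a similarity-transform fit; the centre-coordinate pairing step described above is not reproduced, and the file names in the usage comment are placeholders.

    ```python
    import cv2
    import numpy as np

    def register_kv_to_drr(kv_img, drr_img):
        """Estimate a scale + shift (+ rotation) transform mapping the kV image onto the DRR.

        Both inputs are 8-bit grayscale numpy arrays. Uses SIFT key points and a
        Lowe ratio test to select matches, then fits a similarity transform.
        """
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(kv_img, None)
        kp2, des2 = sift.detectAndCompute(drr_img, None)
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        src = np.float32([kp1[m.queryIdx].pt for m in good])
        dst = np.float32([kp2[m.trainIdx].pt for m in good])
        M, inliers = cv2.estimateAffinePartial2D(src, dst)   # 2x3 similarity transform
        return M

    # Hypothetical usage with placeholder file names:
    # drr = cv2.imread("drr_ap.png", cv2.IMREAD_GRAYSCALE)
    # kv = cv2.imread("kv_ap.png", cv2.IMREAD_GRAYSCALE)
    # print(register_kv_to_drr(kv, drr))   # scaling/rotation plus horizontal/vertical shifts
    ```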

  14. SP mountain data analysis

    NASA Technical Reports Server (NTRS)

    Rawson, R. F.; Hamilton, R. E.; Liskow, C. L.; Dias, A. R.; Jackson, P. L.

    1981-01-01

    An analysis of synthetic aperture radar data of SP Mountain was undertaken to demonstrate the use of digital image processing techniques to aid in geologic interpretation of SAR data. These data were collected with the ERIM X- and L-band airborne SAR using like- and cross-polarizations. The resulting signal films were used to produce computer compatible tapes, from which four-channel imagery was generated. Slant range-to-ground range and range-azimuth-scale corrections were made in order to facilitate image registration; intensity corrections were also made. Manual interpretation of the imagery showed that L-band represented the geology of the area better than X-band. Several differences between the various images were also noted. Further digital analysis of the corrected data was done for enhancement purposes. This analysis included application of an MSS differencing routine and development of a routine for removal of relief displacement. It was found that accurate registration of the SAR channels is critical to the effectiveness of the differencing routine. Use of the relief displacement algorithm on the SP Mountain data demonstrated the feasibility of the technique.

  15. Nonlinear Photonic Systems for V- and W-Band Antenna Remoting Applications

    DTIC Science & Technology

    2016-10-22

    For commercial, academic, and military purposes, optical carriers are used to deliver microwave signals through fibers to remote areas for wireless sensing, imaging, and detection.

  16. Evaluation of image quality in terahertz pulsed imaging using test objects.

    PubMed

    Fitzgerald, A J; Berry, E; Miles, R E; Zinovev, N N; Smith, M A; Chamberlain, J M

    2002-11-07

    As with other imaging modalities, the performance of terahertz (THz) imaging systems is limited by spatial resolution, contrast and noise. The purpose of this paper is to introduce test objects and image analysis methods to evaluate and compare THz image quality in a quantitative and objective way, so that alternative terahertz imaging system configurations and acquisition techniques can be compared and the range of image parameters can be assessed. Two test objects were designed and manufactured, one to determine the modulation transfer function (MTF) and the other to derive image signal-to-noise ratio (SNR) at a range of contrasts. As expected, the higher THz frequencies had larger MTFs and better spatial resolution, as determined by the spatial frequency at which the MTF dropped below the 20% threshold. Image SNR was compared for time-domain and frequency-domain image parameters, and time-delay-based images consistently demonstrated higher SNR than intensity-based parameters such as relative transmittance, because the latter are more strongly affected by sources of noise in the THz system such as laser fluctuations and detector shot noise.

  17. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of the frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and laser tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool for determining the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests testing frequencies based on each test's risk priority number.
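
    For reference, the classic FMEA ranking combines occurrence, severity, and detectability scores into a risk priority number (RPN). The sketch below uses placeholder 1-10 scores, not the quantitative values derived in the study, whose approach modifies this classic scheme.

    ```python
    # Classic FMEA ranking: RPN = occurrence x severity x detectability (1-10 scales).
    # The scores below are placeholders, not the paper's measured values.
    qa_tests = {
        "output":                         {"occurrence": 7, "severity": 5, "detectability": 3},
        "lasers":                         {"occurrence": 6, "severity": 4, "detectability": 3},
        "imaging vs treatment isocenter": {"occurrence": 3, "severity": 8, "detectability": 4},
        "ODI / jaws vs light field":      {"occurrence": 2, "severity": 3, "detectability": 5},
    }

    def rpn(scores):
        return scores["occurrence"] * scores["severity"] * scores["detectability"]

    # Rank the QA tests from highest to lowest risk priority number
    for test, scores in sorted(qa_tests.items(), key=lambda kv: -rpn(kv[1])):
        print(f"{test:32s} RPN = {rpn(scores)}")
    ```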

  18. GrinLine identification using digital imaging and Adobe Photoshop.

    PubMed

    Bollinger, Susan A; Brumit, Paula C; Schrader, Bruce A; Senn, David R

    2009-03-01

    The purpose of this study was to outline a method by which an antemortem photograph of a victim can be critically compared with a postmortem photograph in an effort to facilitate the identification process. Ten subjects, between 27 and 55 years old provided historical pictures of themselves exhibiting a broad smile showing anterior teeth to some extent (a grin). These photos were termed "antemortem" for the purpose of the study. A digital camera was used to take a current photo of each subject's grin. These photos represented the "postmortem" images. A single subject's "postmortem" photo set was randomly selected to be the "unknown victim." These combined data of the unknown and the 10 antemortem subjects were digitally stored and, using Adobe Photoshop software, the images were sized and oriented for comparative analysis. The goal was to devise a technique that could facilitate the accurate determination of which "antemortem" subject was the "unknown." The generation of antemortem digital overlays of the teeth visible in a grin and the comparison of those overlays to the images of the postmortem dentition is the foundation of the technique. The comparisons made using the GrinLine Identification Technique may assist medical examiners and coroners in making identifications or exclusions.

  19. Multipurpose, dual-mode imaging in the 3-5 μm range (MWIR) for artwork diagnostics: A systematic approach

    NASA Astrophysics Data System (ADS)

    Daffara, Claudia; Parisotto, Simone; Ambrosini, Dario

    2018-05-01

    We present a multi-purpose, dual-mode imaging method in the Mid-Wavelength Infrared (MWIR) range (from 3 μm to 5 μm) for more efficient nondestructive analysis of artworks. Using a setup based on a MWIR thermal camera and multiple radiation sources, two radiometric image datasets are acquired in different modalities: the image in quasi-reflectance mode (TQR) and the thermal sequence in emission mode. The advantages are the complementarity of the information, the use of the quasi-reflectance map for calculating the emissivity map, and the use of the TQR map to reference the thermographic images to the visible. The concept of the method is presented, its practical feasibility is demonstrated through a custom imaging setup, and its potential for nondestructive analysis is shown in a notable cultural heritage application. The method has been used as an experimental tool in support of the restoration of the mural painting "Monocromo" by Leonardo da Vinci. Feedback from the operators and a comparison with some conventional diagnostic techniques are also given to underline the novelty and potential of the method.

  20. Study on recognition technology of complementary image

    NASA Astrophysics Data System (ADS)

    Liu, Chengxiang; Hu, Xuejuan; Jian, Yaobo; Zhang, Li

    2006-11-01

    Complementary-image printing is often used as a security feature in trademarks and paper currency. The key point in recognizing this kind of image is judging the complementary effect of the paired printing. The perspective images are usually neither clear nor legible, so they are difficult to recognize. In this paper, a new method is proposed. First, the image is captured by reflection. Second, a common norm is found for the man-made pair printing. Last, the genuineness of the paper currency is judged by the complementary effect of the paired printing, which serves the purpose of counterfeit inspection. Theoretical analysis and simulation results reveal that the effect of man-made pair printing is good and that the method has advantages such as simplicity, high computation speed, and good robustness to different RMB notes. The experimental results confirm that the conclusion is reasonable and demonstrate that this approach is effective.

  1. Sensitometric and image analysis of T-grain film.

    PubMed

    Thunthy, K H; Weinberg, R

    1986-08-01

    The new Kodak T-grain film is the result of a new technology that makes fast films with high image resolution. The purpose of the investigation was to determine the sensitometric properties and image quality of a T-grain film (T-Mat G) and also to compare this film with a green-sensitive orthochromatic film (Ortho G) and a blue-sensitive film (XRP). The criteria for film evaluation were relative speed, average contrast, exposure latitude, and image resolution. The results showed that the T-Mat G film is twice as fast as the X-Omat RP film and one and one-third times as fast as the Ortho G film. T-Mat G also produces high resolution and high contrast. This is contrary to the widely held notion that speed is inversely proportional to image quality.

  2. The Cognitive Content of the World of Symbols in a Language

    ERIC Educational Resources Information Center

    Zhirenov, Sayan A.; Satemirova, Darikha A.; Ibraeva, Aizat D.; Tanzharikova, Alua V.

    2016-01-01

    The purpose of this study is to analyze the meaning of symbols, the symbolic world in linguistics. Using the methods of observation, analysis, synthesis and interpretation, the author determines the category of symbols in linguistic-cognitive research. The study delineates connection between linguistic image of the universe and symbolic categories…

  3. Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Unal, Hasan

    2014-01-01

    The purpose of this study was to introduce a problem posing activity to third grade students who had never met it before. The study also explored students' metaphorical images of the problem posing process. Participants were from a public school in the Marmara Region in Turkey. Data were analyzed both qualitatively (content analysis for difficulty and…

  4. Lived Experience of Women Suffering from Vitiligo: A Phenomenological Study

    ERIC Educational Resources Information Center

    Borimnejad, Leili; Yekta, Zohreh Parsa; Nasrabadi, Alireza Nikbakht

    2006-01-01

    Vitiligo is a chronic skin disease, which through change of appearance and body image, exerts a devastating effect on people, especially women. The objective of this study is to explore lived experience of women with Vitiligo by the hermeneutic phenomenology method. The purposive sample consisted of 16 Iranian women. Data analysis followed…

  5. Teacher Argumentation in the Secondary Science Classroom: Images of Two Modes of Scientific Inquiry

    ERIC Educational Resources Information Center

    Gray, Ron E.

    2009-01-01

    The purpose of this exploratory study was to examine scientific arguments constructed by secondary science teachers during instruction. The analysis focused on how arguments constructed by teachers differed based on the mode of inquiry underlying the topic. Specifically, how did the structure and content of arguments differ between experimentally…

  6. Advanced Spectroscopic and Thermal Imaging Instrumentation for Shock Tube and Ballistic Range Facilities

    DTIC Science & Technology

    2010-04-01

    the development process, increase its quality and reduce development time through automation of synthesis, analysis or verification. For this purpose...made of time-non-deterministic systems, improving efficiency and reducing complexity of formal analysis. We also show how our theory relates to, and...of the most recent investigations for Earth and Mars atmospheres will be discussed in the following sections. 2.4.1 Earth: lunar return NASA’s

  7. Computational assessment of mammography accreditation phantom images and correlation with human observer analysis

    NASA Astrophysics Data System (ADS)

    Barufaldi, Bruno; Lau, Kristen C.; Schiabel, Homero; Maidment, D. A.

    2015-03-01

    Routine performance of basic test procedures and dose measurements are essential for assuring high quality of mammograms. International guidelines recommend that breast care providers ascertain that mammography systems produce a constant high quality image, using as low a radiation dose as is reasonably achievable. The main purpose of this research is to develop a framework to monitor radiation dose and image quality in a mixed breast screening and diagnostic imaging environment using an automated tracking system. This study presents a module of this framework, consisting of a computerized system to measure the image quality of the American College of Radiology mammography accreditation phantom. The methods developed combine correlation approaches, matched filters, and data mining techniques. These methods have been used to analyze radiological images of the accreditation phantom. The classification of structures of interest is based upon reports produced by four trained readers. As previously reported, human observers demonstrate great variation in their analysis due to the subjectivity of human visual inspection. The software tool was trained with three sets of 60 phantom images in order to generate decision trees using the software WEKA (Waikato Environment for Knowledge Analysis). When tested with 240 images during the classification step, the tool correctly classified 88%, 99%, and 98%, of fibers, speck groups and masses, respectively. The variation between the computer classification and human reading was comparable to the variation between human readers. This computerized system not only automates the quality control procedure in mammography, but also decreases the subjectivity in the expert evaluation of the phantom images.
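
    A toy stand-in for the classification step, using a scikit-learn decision tree in place of WEKA and two invented features per candidate structure (a matched-filter correlation and a local contrast); the real system's features and training data differ.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical features per candidate structure in a phantom image:
    # [matched-filter correlation, local contrast]; label 1 = structure visible.
    rng = np.random.default_rng(3)
    visible = np.column_stack([rng.normal(0.8, 0.1, 150), rng.normal(0.30, 0.05, 150)])
    absent = np.column_stack([rng.normal(0.4, 0.1, 150), rng.normal(0.10, 0.05, 150)])
    X = np.vstack([visible, absent])
    y = np.array([1] * 150 + [0] * 150)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
    print("test accuracy:", round(tree.score(X_test, y_test), 3))
    ```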

  8. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl; Tijssen, Rob H.N.; Senneville, Baudouin D. de

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  9. MR imaging with i.v. superparamagnetic iron oxide: efficacy in the detection of focal hepatic lesions.

    PubMed

    Winter, T C; Freeny, P C; Nghiem, H V; Mack, L A; Patten, R M; Thomas, C R; Elliott, S

    1993-12-01

    The purpose of this study was to evaluate the efficacy of superparamagnetic iron oxide (SPIO) in the detection of focal hepatic lesions on MR images. The study included 21 patients with 115 focal hepatic lesions and eight patients without focal hepatic lesions. T1- and T2-weighted MR images were obtained at 1.5 T before and 60 min after the end of injection of an SPIO agent. Contrast-enhanced CT scans were obtained in all patients within 10 days after MR imaging. The effect of SPIO on the signal intensity of the liver and spleen was assessed by using quantitative analysis of the region of interest. Efficacy was evaluated by using multiple criteria and unenhanced and SPIO-enhanced images. Evaluations included subjective assessment of image quality, counting the number of lesions detected, and statistical analysis of quantitative changes in the signal intensity of lesions and of normal liver. By all criteria, SPIO-enhanced T2-weighted MR images were superior to unenhanced T2-weighted images and to contrast-enhanced CT scans. Conversely, by all criteria, SPIO-enhanced T1-weighted MR images were worse than unenhanced T1-weighted images and contrast-enhanced CT scans. The mean lesion-to-liver contrast on T2-weighted images was 317% on unenhanced images and 1745% on SPIO-enhanced images. For T1-weighted images, the mean contrast was 26% on unenhanced images and 18% on SPIO-enhanced images. SPIO is an efficacious contrast agent for the detection of focal hepatic lesions when T2-weighted MR images are used.

  10. Accuracy of Presurgical Functional MR Imaging for Language Mapping of Brain Tumors: A Systematic Review and Meta-Analysis.

    PubMed

    Weng, Hsu-Huei; Noll, Kyle R; Johnson, Jason M; Prabhu, Sujit S; Tsai, Yuan-Hsiung; Chang, Sheng-Wei; Huang, Yen-Chu; Lee, Jiann-Der; Yang, Jen-Tsung; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh; Hazle, John D; Schomer, Donald F; Liu, Ho-Ling

    2018-02-01

    Purpose To compare functional magnetic resonance (MR) imaging for language mapping (hereafter, language functional MR imaging) with direct cortical stimulation (DCS) in patients with brain tumors and to assess factors associated with its accuracy. Materials and Methods PubMed/MEDLINE and related databases were searched for research articles published between January 2000 and September 2016. Findings were pooled by using bivariate random-effects and hierarchic summary receiver operating characteristic curve models. Meta-regression and subgroup analyses were performed to evaluate whether publication year, functional MR imaging paradigm, magnetic field strength, statistical threshold, and analysis software affected classification accuracy. Results Ten articles with a total of 214 patients were included in the analysis. On a per-patient basis, the pooled sensitivity and specificity of functional MR imaging was 44% (95% confidence interval [CI]: 14%, 78%) and 80% (95% CI: 54%, 93%), respectively. On a per-tag basis (ie, each DCS stimulation site or "tag" was considered a separate data point across all patients), the pooled sensitivity and specificity were 67% (95% CI: 51%, 80%) and 55% (95% CI: 25%, 82%), respectively. The per-tag analysis showed significantly higher sensitivity for studies with shorter functional MR imaging session times (P = .03) and relaxed statistical threshold (P = .05). Significantly higher specificity was found when expressive language task (P = .02), longer functional MR imaging session times (P < .01), visual presentation of stimuli (P = .04), and stringent statistical threshold (P = .01) were used. Conclusion Results of this study showed moderate accuracy of language functional MR imaging when compared with intraoperative DCS, and the included studies displayed significant methodologic heterogeneity. © RSNA, 2017 Online supplemental material is available for this article.

  11. Seismic imaging of post-glacial sediments - test study before Spitsbergen expedition

    NASA Astrophysics Data System (ADS)

    Szalas, Joanna; Grzyb, Jaroslaw; Majdanski, Mariusz

    2017-04-01

    This work presents results of the analysis of reflection seismic data acquired from a testing area in central Poland. For this experiment we used a total of 147 vertical-component seismic stations (DATA-CUBE and Reftek "Texan") with an accelerated weight drop (PEG-40). The profile was 350 metres long. It is part of a pilot study for a future research project on Spitsbergen. The purpose of the study is to recognise the characteristics of the seismic response of post-glacial sediments in order to design the most adequate survey acquisition parameters and processing sequence for data from Spitsbergen. Multiple tests and comparisons have been performed to obtain the best possible quality of the seismic image. In this research we examine the influence of receiver interval size, front mute application and surface wave attenuation attempts. Although seismic imaging is the main technique, we plan to support this analysis with additional data from traveltime tomography, MASW and other a priori information.

  12. Analysis of interstellar fragmentation structure based on IRAS images

    NASA Technical Reports Server (NTRS)

    Scalo, John M.

    1989-01-01

    The goal of this project was to develop new tools for the analysis of the structure of densely sampled maps of interstellar star-forming regions. A particular emphasis was on the recognition and characterization of nested hierarchical structure and fractal irregularity, and their relation to the level of star formation activity. The panoramic IRAS images provided data with the required range in spatial scale, greater than a factor of 100, and in column density, greater than a factor of 50. In order to construct a densely sampled column density map of a cloud complex which is both self-gravitating and not (yet?) stirred up much by star formation, a column density image of the Taurus region has been constructed from IRAS data. The primary drawback to using the IRAS data for this purpose is that it contains no velocity information, and the possible importance of projection effects must be kept in mind.

  13. Kinetic Analysis of Benign and Malignant Breast Lesions With Ultrafast Dynamic Contrast-Enhanced MRI: Comparison With Standard Kinetic Assessment.

    PubMed

    Abe, Hiroyuki; Mori, Naoko; Tsuchiya, Keiko; Schacht, David V; Pineda, Federico D; Jiang, Yulei; Karczmar, Gregory S

    2016-11-01

    The purposes of this study were to evaluate diagnostic parameters measured with ultrafast MRI acquisition and with standard acquisition and to compare diagnostic utility for differentiating benign from malignant lesions. Ultrafast acquisition is a high-temporal-resolution (7 seconds) imaging technique for obtaining 3D whole-breast images. The dynamic contrast-enhanced 3-T MRI protocol consists of an unenhanced standard and an ultrafast acquisition that includes eight contrast-enhanced ultrafast images and four standard images. Retrospective assessment was performed for 60 patients with 33 malignant and 29 benign lesions. A computer-aided detection system was used to obtain initial enhancement rate and signal enhancement ratio (SER) by means of identification of a voxel showing the highest signal intensity in the first phase of standard imaging. From the same voxel, the enhancement rate at each time point of the ultrafast acquisition and the AUC of the kinetic curve from zero to each time point of ultrafast imaging were obtained. There was a statistically significant difference between benign and malignant lesions in enhancement rate and kinetic AUC for ultrafast imaging and also in initial enhancement rate and SER for standard imaging. ROC analysis showed no significant differences between enhancement rate in ultrafast imaging and SER or initial enhancement rate in standard imaging. Ultrafast imaging is useful for discriminating benign from malignant lesions. The differential utility of ultrafast imaging is comparable to that of standard kinetic assessment in a shorter study time.
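
    A minimal sketch of the kinetic descriptors named above, computed from a single voxel's signal-time curve under widely used definitions (initial enhancement rate, signal enhancement ratio, and kinetic AUC); the CAD system's exact formulas and timing may differ, and the curve below is synthetic.

    ```python
    import numpy as np

    def kinetic_parameters(times_s, signal, early_idx=1, late_idx=-1):
        """Common DCE-MRI kinetic descriptors for one voxel's signal-time curve.

        Definitions follow widely used conventions and may differ in detail from
        the computer-aided detection system referenced in the abstract.
        """
        s0 = signal[0]                                   # pre-contrast signal
        enh = (signal - s0) / s0                         # relative enhancement
        init_rate = enh[early_idx] / times_s[early_idx]  # initial enhancement rate (1/s)
        ser = enh[early_idx] / enh[late_idx]             # signal enhancement ratio
        auc = float(np.sum((enh[1:] + enh[:-1]) / 2.0 * np.diff(times_s)))  # kinetic AUC
        return init_rate, ser, auc

    # Hypothetical ultrafast curve: pre-contrast phase plus 8 phases at 7 s resolution
    t = np.arange(0, 9) * 7.0
    s = np.array([100, 128, 150, 163, 171, 176, 179, 181, 182], dtype=float)
    print(kinetic_parameters(t, s))
    ```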

  14. Advanced GPR imaging of sedimentary features: integrated attribute analysis applied to sand dunes

    NASA Astrophysics Data System (ADS)

    Zhao, Wenke; Forte, Emanuele; Fontolan, Giorgio; Pipan, Michele

    2018-04-01

    We evaluate the applicability and the effectiveness of integrated GPR attribute analysis to image the internal sedimentary features of the Piscinas Dunes, SW Sardinia, Italy. The main objective is to explore the limits of GPR techniques to study sediment-body geometry and to provide a non-invasive high-resolution characterization of the different subsurface domains of dune architecture. For this purpose, we exploit the high-quality Piscinas data-set to extract and test different attributes of the GPR trace. Composite displays of multi-attributes related to amplitude, frequency, similarity and textural features are displayed with overlays and RGB mixed models. A multi-attribute comparative analysis is used to characterize different radar facies to better understand the characteristics of internal reflection patterns. The results demonstrate that the proposed integrated GPR attribute analysis can provide enhanced information about the spatial distribution of sediment bodies, allowing an enhanced and more constrained data interpretation.

  15. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found to be advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed for correlating selected image features to response. These models showed improved performance compared to current methods using a cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy in evaluation of tumor response. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  16. Comparison of MR imaging sequences for liver and head and neck interventions: is there a single optimal sequence for all purposes?

    PubMed

    Boll, Daniel T; Lewin, Jonathan S; Duerk, Jeffrey L; Aschoff, Andrik J; Merkle, Elmar M

    2004-05-01

    To compare the appropriate pulse sequences for interventional device guidance during magnetic resonance (MR) imaging at 0.2 T and to evaluate the dependence of sequence selection on the anatomic region of the procedure. Using a C-arm 0.2 T system, four interventional MR sequences were applied in 23 liver cases and during MR-guided neck interventions in 13 patients. The imaging protocol consisted of: multislice turbo spin echo (TSE) T2w, sequential-slice fast imaging with steady precession (FISP), a time-reversed version of FISP (PSIF), and FISP with balanced gradients in all spatial directions (True-FISP) sequences. Vessel conspicuity was rated and contrast-to-noise ratio (CNR) was calculated for each sequence and a differential receiver operating characteristic analysis was performed. Liver findings were detected in 96% using the TSE sequence. PSIF, FISP, and True-FISP imaging showed lesions in 91%, 61%, and 65%, respectively. The TSE sequence offered the best CNR, followed by PSIF imaging. Differential receiver operating characteristic analysis also rated TSE and PSIF to be the superior sequences. Lesions in the head and neck were detected in all cases by TSE and FISP, in 92% using True-FISP, and in 84% using PSIF. True-FISP offered the best CNR, followed by TSE imaging. Vessels appeared bright on FISP and True-FISP imaging and dark on the other sequences. In interventional MR imaging, no single sequence fits all purposes. Image guidance for interventional MR during liver procedures is best achieved by PSIF or TSE, whereas biopsies in the head and neck are best performed using FISP or True-FISP sequences.
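
    For reference, the contrast-to-noise ratio used to compare the sequences can be computed from two regions of interest as in the short sketch below. This is a generic definition (lesion vs. background ROI, with noise taken as the background standard deviation), not necessarily the exact formulation used in the study.

      import numpy as np

      def contrast_to_noise_ratio(image, lesion_mask, background_mask):
          """Contrast-to-noise ratio of a lesion against adjacent background tissue.

          CNR = |mean(lesion) - mean(background)| / std(background)
          """
          lesion = image[lesion_mask].astype(float)
          background = image[background_mask].astype(float)
          return abs(lesion.mean() - background.mean()) / background.std()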

  17. Quantitative Analysis of ¹⁸F-Fluorodeoxyglucose Positron Emission Tomography Identifies Novel Prognostic Imaging Biomarkers in Locally Advanced Pancreatic Cancer Patients Treated With Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yi; Global Institution for Collaborative Research and Education, Hokkaido University, Sapporo; Song, Jie

    Purpose: To identify prognostic biomarkers in pancreatic cancer using high-throughput quantitative image analysis. Methods and Materials: In this institutional review board–approved study, we retrospectively analyzed images and outcomes for 139 locally advanced pancreatic cancer patients treated with stereotactic body radiation therapy (SBRT). The overall population was split into a training cohort (n=90) and a validation cohort (n=49) according to the time of treatment. We extracted quantitative imaging characteristics from pre-SBRT ¹⁸F-fluorodeoxyglucose positron emission tomography, including statistical, morphologic, and texture features. A Cox proportional hazard regression model was built to predict overall survival (OS) in the training cohort using 162 robust image features. To avoid over-fitting, we applied the elastic net to obtain a sparse set of image features, whose linear combination constitutes a prognostic imaging signature. Univariate and multivariate Cox regression analyses were used to evaluate the association with OS, and concordance index (CI) was used to evaluate the survival prediction accuracy. Results: The prognostic imaging signature included 7 features characterizing different tumor phenotypes, including shape, intensity, and texture. On the validation cohort, univariate analysis showed that this prognostic signature was significantly associated with OS (P=.002, hazard ratio 2.74), which improved upon conventional imaging predictors including tumor volume, maximum standardized uptake value, and total lesion glycolysis (P=.018-.028, hazard ratio 1.51-1.57). On multivariate analysis, the proposed signature was the only significant prognostic index (P=.037, hazard ratio 3.72) when adjusted for conventional imaging and clinical factors (P=.123-.870, hazard ratio 0.53-1.30). In terms of CI, the proposed signature scored 0.66 and was significantly better than competing prognostic indices (CI 0.48-0.64, Wilcoxon rank sum test P<1e-6). Conclusion: Quantitative analysis identified novel ¹⁸F-fluorodeoxyglucose positron emission tomography image features that showed improved prognostic value over conventional imaging metrics. If validated in large, prospective cohorts, the new prognostic signature might be used to identify patients for individualized risk-adaptive therapy.
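
    As a side note, the concordance index used to score the signature can be computed directly from risk scores and (possibly censored) survival data; the sketch below is a plain implementation of Harrell's C-index, not the authors' code.

      import numpy as np

      def concordance_index(risk, time, event):
          """Harrell's C-index: fraction of comparable pairs ordered correctly by risk.

          risk  : predicted risk scores (higher = worse prognosis)
          time  : observed survival times
          event : 1 if the event was observed, 0 if censored
          """
          concordant, ties, comparable = 0.0, 0.0, 0
          n = len(time)
          for i in range(n):
              for j in range(n):
                  # a pair is comparable if subject i had the event before subject j's time
                  if event[i] == 1 and time[i] < time[j]:
                      comparable += 1
                      if risk[i] > risk[j]:
                          concordant += 1
                      elif risk[i] == risk[j]:
                          ties += 1
          return (concordant + 0.5 * ties) / comparable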

  18. Verification of the linac isocenter for stereotactic radiosurgery using cine-EPID imaging and arc delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowshanfarzad, Pejman; Sabet, Mahsheed; O'Connor, Daryl J.

    2011-07-15

    Purpose: Verification of the mechanical isocenter position is required as part of comprehensive quality assurance programs for stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments. Several techniques have been proposed for this purpose but each of them has certain drawbacks. In this paper, a new, efficient and more comprehensive method using cine-EPID images has been introduced for automatic verification of the isocenter with sufficient accuracy for stereotactic applications. Methods: Using a circular collimator fixed to the gantry head to define the field, EPID images of a Winston-Lutz phantom were acquired in cine-imaging mode during 360° gantry rotations. A robust MATLAB code was developed to analyze the data by finding the center of the field and the center of the ball bearing shadow in each image with sub-pixel accuracy. The distance between these two centers was determined for every image. The method was evaluated by comparison to results of a mechanical pointer and also by detection of a manual shift applied to the phantom position. The repeatability and reproducibility of the method were tested and it was also applied to detect couch and collimator wobble during rotation. Results: The accuracy of the algorithm was 0.03 ± 0.02 mm. The repeatability was less than 3 µm and the reproducibility was less than 86 µm. The time elapsed for the analysis of more than 100 cine images of Varian aS1000 and aS500 EPIDs was approximately 65 and 20 s, respectively. Processing of images taken in integrated mode took 0.1 s. The output of the analysis software is printable and shows the isocenter shifts as a function of angle in both in-plane and cross-plane directions. It gives warning messages where the shifts exceed the criteria for SRS/SRT and provides useful data for the necessary adjustments in the system including the bearing system and/or room lasers. Conclusions: The comprehensive method introduced in this study uses cine-images, is highly accurate, fast, and independent of the observer. It tests all gantry angles and is suitable for pretreatment QA of the isocenter for stereotactic treatments.
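
    A minimal sketch of the per-image measurement described above (field centre vs. ball-bearing centre) is given below; it assumes a bright circular field containing a dark BB shadow and uses simple intensity centroids rather than the authors' MATLAB implementation.

      import numpy as np
      from scipy import ndimage

      def isocenter_offset(epid_image, pixel_size_mm):
          """Estimate the field-centre to ball-bearing-centre distance in one EPID frame.

          Assumes a bright circular field containing a dark ball-bearing (BB) shadow.
          """
          img = epid_image.astype(float)
          # field centre: centroid of pixels above 50% of the maximum signal
          # (the dark BB leaves a small hole in the mask, acceptable for a sketch)
          field_mask = img > 0.5 * img.max()
          field_c = np.array(ndimage.center_of_mass(field_mask))
          # BB centre: intensity-weighted centroid of the inverted signal inside the field
          inv = np.where(field_mask, img.max() - img, 0.0)
          bb_c = np.array(ndimage.center_of_mass(inv))
          return np.linalg.norm(field_c - bb_c) * pixel_size_mm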

  19. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture.

    PubMed

    Yamamoto, Kyosuke; Togami, Takashi; Yamaguchi, Norio

    2017-11-06

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture-in cooperation with image processing technologies-for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis.

  20. Super-Resolution of Plant Disease Images for the Acceleration of Image-based Phenotyping and Vigor Diagnosis in Agriculture

    PubMed Central

    Togami, Takashi; Yamaguchi, Norio

    2017-01-01

    Unmanned aerial vehicles (UAVs or drones) are a very promising branch of technology, and they have been utilized in agriculture—in cooperation with image processing technologies—for phenotyping and vigor diagnosis. One of the problems in the utilization of UAVs for agricultural purposes is the limitation in flight time. It is necessary to fly at a high altitude to capture the maximum number of plants in the limited time available, but this reduces the spatial resolution of the captured images. In this study, we applied a super-resolution method to the low-resolution images of tomato diseases to recover detailed appearances, such as lesions on plant organs. We also conducted disease classification using high-resolution, low-resolution, and super-resolution images to evaluate the effectiveness of super-resolution methods in disease classification. Our results indicated that the super-resolution method outperformed conventional image scaling methods in spatial resolution enhancement of tomato disease images. The results of disease classification showed that the accuracy attained was also better by a large margin with super-resolution images than with low-resolution images. These results indicated that our approach not only recovered the information lost in low-resolution images, but also exerted a beneficial influence on further image analysis. The proposed approach will accelerate image-based phenotyping and vigor diagnosis in the field, because it not only saves time to capture images of a crop in a cultivation field but also secures the accuracy of these images for further analysis. PMID:29113104

  1. Graph-based layout analysis for PDF documents

    NASA Astrophysics Data System (ADS)

    Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao

    2013-03-01

    To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digital-born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be straightforwardly exploited. To integrate traditional image-based document analysis with the inherent meta-data provided by the PDF parser, the page primitives including text, image and path elements are processed to produce text and non-text layers for separate analysis. The graph-based method is developed at the superpixel representation level, and page text elements corresponding to vertices are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the graph tree formed by Kruskal's algorithm. Edge orientation is then used in a bottom-up manner to extract text lines from each sub-tree. On the other hand, non-textual objects are segmented by connected component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. The experimental results on selected pages from PDF books are presented.
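
    The top-down graph cut described above can be sketched as follows: build a minimum spanning tree over the text-element centroids and remove edges longer than a distance threshold, so that the remaining connected components form candidate blocks. Function and parameter names are illustrative, not the authors' implementation.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
      from scipy.spatial.distance import pdist, squareform

      def cluster_text_elements(centroids, cut_length):
          """Group text-element centroids by cutting long edges of a minimum spanning tree.

          centroids  : (n, 2) array of element centre coordinates
          cut_length : Euclidean distance above which an MST edge is removed
          """
          dist = squareform(pdist(centroids))            # full pairwise distance matrix
          mst = minimum_spanning_tree(dist).toarray()    # spanning tree as a dense matrix
          mst[mst > cut_length] = 0                      # cut edges longer than the threshold
          n_blocks, labels = connected_components(mst, directed=False)
          return n_blocks, labels                        # one block label per text element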

  2. Imaging services at the Paralympic Games London 2012: analysis of demand and distribution of workload.

    PubMed

    Bethapudi, Sarath; Campbell, Robert S D; Budgett, Richard; Willick, Stuart E; Van de Vliet, Peter

    2015-01-01

    Very little data have been published on medical imaging services at disability games. 7.9 million euros (£6.6 million, US$11 million) were invested in setting up radiology facilities within purpose built polyclinics at the London 2012 Olympic and Paralympic games. This paper details imaging services at the 2012 Paralympic Games. Data analysis on imaging at 2012 Olympics has been published in a separate paper. To analyse the workload on the polyclinics' radiology services, provided for the Paralympic athletes during the London 2012 Paralympic Games. Data were prospectively collected during the period of the Paralympic games from the Picture Archive Communications System (PACS) and the Radiological Information System (RIS). Data were correlated with the medical encounter database (ATOS). 655 imaging episodes were recorded, which comprised 38.8% (n=254) MRI, 33% (n=216) plain radiographs, 24% (n=157) ultrasound scans and 4.2% (n=28) CT scans. Investigations on the Paralympic athletes accounted for 65.2% of workload, with the remainder divided between Paralympic family and workforce. MRI was the most used imaging resource and CT was the least used imaging modality at the Paralympic village polyclinic. Analysis of demographic data provides a useful index for planning radiology infrastructure and manpower at future international competitions for athletes with a disability. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  3. Development and validation of technique for in-vivo 3D analysis of cranial bone graft survival

    NASA Astrophysics Data System (ADS)

    Bernstein, Mark P.; Caldwell, Curtis B.; Antonyshyn, Oleh M.; Ma, Karen; Cooper, Perry W.; Ehrlich, Lisa E.

    1997-05-01

    Bone autografts are routinely employed in the reconstruction of facial deformities resulting from trauma, tumor ablation or congenital malformations. The combined use of post- operative 3D CT and SPECT imaging provides a means for quantitative in vivo evaluation of bone graft volume and osteoblastic activity. The specific objectives of this study were: (1) Determine the reliability and accuracy of interactive computer-assisted analysis of bone graft volumes based on 3D CT scans; (2) Determine the error in CT/SPECT multimodality image registration; (3) Determine the error in SPECT/SPECT image registration; and (4) Determine the reliability and accuracy of CT-guided SPECT uptake measurements in cranial bone grafts. Five human cadaver heads served as anthropomorphic models for all experiments. Four cranial defects were created in each specimen with inlay and onlay split skull bone grafts and reconstructed to skull and malar recipient sites. To acquire all images, each specimen was CT scanned and coated with Technetium doped paint. For purposes of validation, skulls were landmarked with 1/16-inch ball-bearings and Indium. This study provides a new technique relating anatomy and physiology for the analysis of cranial bone graft survival.

  4. Factors influencing adolescent girls' sexual behavior: a secondary analysis of the 2011 youth risk behavior survey.

    PubMed

    Anatale, Katharine; Kelly, Sarah

    2015-03-01

    Adolescence is a tumultuous and challenging time period in life. Sexual risk behavior among adolescents is a widespread topic of interest in the current literature. Two common factors that influence increased sexual risk behavior are symptoms of depression and negative body image. The purpose of this study was to investigate the effect of body image and symptoms of depression upon sexual risk-taking in an adolescent female population. A secondary data analysis of the 2011 Youth Risk Behavior Survey (YRBS) was used to explore girls' sexual activity, body image, and mental health. There were 7,708 high-school girls who participated in this study. Three questions were used to represent the constructs under investigation. There were significant correlations between sexual activity, body image, and symptoms of depression; only symptoms of depression were significant predictors of both sexual activity and condom usage. Body image was a predictor of sexual activity, but not condom use. Our findings support previous studies that suggested that people with depressive symptoms were more likely to engage in risky sexual behaviors. Our study also supports the idea that a negative body image decreases sexual activity; however, other researchers have reported that negative body image leads to an increase in sexual activity.

  5. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters with CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
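
    A common concrete instance of the regularized deconvolution discussed above is truncated-SVD deconvolution of the tissue curve with the arterial input function (AIF); the sketch below illustrates the role of the regularization (truncation) threshold and is not the cascaded-systems framework itself.

      import numpy as np

      def ctp_deconvolution(tissue_curve, aif, dt, svd_threshold=0.2):
          """Truncated-SVD deconvolution of a tissue time-density curve with the AIF.

          Returns the flow-scaled residue function; its maximum approximates CBF.
          svd_threshold discards singular values below that fraction of the largest one
          (the regularization that trades noise amplification for bias).
          """
          n = len(aif)
          # lower-triangular convolution matrix built from the arterial input function
          A = np.zeros((n, n))
          for i in range(n):
              A[i, :i + 1] = aif[i::-1]
          A *= dt
          U, s, Vt = np.linalg.svd(A)
          s_inv = np.where(s > svd_threshold * s.max(), 1.0 / s, 0.0)   # truncate small singular values
          k = Vt.T @ np.diag(s_inv) @ U.T @ tissue_curve                # flow-scaled residue function
          return k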

  6. Classification of follicular lymphoma images: a holistic approach with symbol-based machine learning methods.

    PubMed

    Zorman, Milan; Sánchez de la Rosa, José Luis; Dinevski, Dejan

    2011-12-01

    It is not very often that a symbol-based machine learning approach is used for image classification and recognition. In this paper we present such an approach, which we first used on follicular lymphoma images. Lymphoma is a broad term encompassing a variety of cancers of the lymphatic system. Lymphoma is differentiated by the type of cell that multiplies and how the cancer presents itself. It is very important to get an exact diagnosis regarding lymphoma and to determine the treatments that will be most effective for the patient's condition. Our work was focused on the identification of lymphomas by finding follicles in microscopy images provided by the Laboratory of Pathology in the University Hospital of Tenerife, Spain. We divided our work into two stages: in the first stage we performed image pre-processing and feature extraction, and in the second stage we used different symbolic machine learning approaches for pixel classification. Symbolic machine learning approaches are often neglected when looking for image analysis tools; although they are known for a very appropriate knowledge representation, they are also claimed to lack computational power. The results we obtained are very promising and show that symbolic approaches can be successful in image analysis applications.

  7. Quantitative Analysis of the Effect of Iterative Reconstruction Using a Phantom: Determining the Appropriate Blending Percentage

    PubMed Central

    Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang

    2015-01-01

    Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) in a reduced radiation dose while preserving a degree of image quality and texture that is similar to that of standard-dose computed tomography (CT). Materials and Methods The CT performance phantom was scanned with standard and dose reduction protocols including reduced mAs or kVp. Image quality parameters including noise, spatial, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard dose CT was investigated in each radiation dose reduction protocol. Results As the percentage of ASIR increased, noise and spatial-resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion Blending the 40% ASIR to the 40% reduced tube-current product can maximize radiation dose reduction and preserve adequate image quality and texture. PMID:25510772
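
    The texture measures listed above (angular second moment, inverse difference moment, correlation, contrast, entropy) are standard gray-level co-occurrence matrix (GLCM) statistics; a sketch using scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops) is shown below. The distance and angle choices are illustrative, not those of the study.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def texture_features(image_8bit):
          """GLCM texture descriptors of a 2D uint8 image (e.g. a CT slice rescaled to 0-255)."""
          glcm = graycomatrix(image_8bit, distances=[1], angles=[0], levels=256,
                              symmetric=True, normed=True)
          p = glcm[:, :, 0, 0]                                   # normalized co-occurrence matrix
          return {
              "angular_second_moment": graycoprops(glcm, "ASM")[0, 0],
              "inverse_difference_moment": graycoprops(glcm, "homogeneity")[0, 0],
              "correlation": graycoprops(glcm, "correlation")[0, 0],
              "contrast": graycoprops(glcm, "contrast")[0, 0],
              "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),  # not provided by graycoprops
          }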

  8. Effect of Chinese Herbal Medicine on Molecular Imaging of Neurological Disorders.

    PubMed

    Yao, Yao; Chen, Ting; Huang, Jing; Zhang, Hong; Tian, Mei

    2017-01-01

    Chinese herbal medicine has been used to treat a wide variety of neurological disorders including stroke, Alzheimer's disease, and Parkinson's disease. However, its mechanism behind the effectiveness remains unclear. Recently, molecular imaging technology has been applied for this purpose, since it can assess the cellular or molecular function in a living subject by using specific imaging probes and/or radioactive tracers, which enable efficient analysis and monitoring the therapeutic response repetitively. This chapter reviews the in vivo functional and metabolic changes after administration of Chinese herbal medicine in various neurological disorders and provides perspectives on the future evaluations of therapeutic response of Chinese herbal medicine. © 2017 Elsevier Inc. All rights reserved.

  9. Clinical evaluation of JPEG2000 compression for digital mammography

    NASA Astrophysics Data System (ADS)

    Sung, Min-Mo; Kim, Hee-Joung; Kim, Eun-Kyung; Kwak, Jin-Young; Yoo, Jae-Kyung; Yoo, Hyung-Sik

    2002-06-01

    Medical images, such as computed radiography (CR) and digital mammographic images, will require large storage facilities and long transmission times for picture archiving and communications system (PACS) implementation. The American College of Radiology and National Electrical Manufacturers Association (ACR/NEMA) group is planning to adopt the JPEG2000 compression algorithm in the Digital Imaging and Communications in Medicine (DICOM) standard to better utilize medical images. The purpose of the study was to evaluate the compression ratios of JPEG2000 for digital mammographic images using peak signal-to-noise ratio (PSNR), receiver operating characteristic (ROC) analysis, and the t-test. Traditional statistical quality measures such as PSNR, a commonly used measure for the evaluation of reconstructed images, quantify how the reconstructed image differs from the original by making pixel-by-pixel comparisons. The ability to accurately discriminate diseased cases from normal cases is evaluated using ROC curve analysis. ROC curves can be used to compare the diagnostic performance of two or more reconstructed images. The t-test can also be used to evaluate the subjective image quality of reconstructed images. The results of the t-test suggested that the possible compression ratio using JPEG2000 for digital mammographic images may be as much as 15:1 without visual loss, while preserving significant medical information at a confidence level of 99%, although both PSNR and ROC analyses suggest that as much as an 80:1 compression ratio can be achieved without affecting clinical diagnostic performance.
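
    For reference, the PSNR between the original and the JPEG2000-reconstructed image is computed as below; the 12-bit maximum value is an assumption for digital mammograms and should be set to the actual bit depth of the data.

      import numpy as np

      def psnr(original, reconstructed, max_value=4095.0):
          """Peak signal-to-noise ratio between an original and a decompressed image."""
          mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
          if mse == 0:
              return float("inf")          # identical images
          return 10.0 * np.log10(max_value ** 2 / mse)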

  10. Repeat analysis of intraoral digital imaging performed by undergraduate students using a complementary metal oxide semiconductor sensor: An institutional case study

    PubMed Central

    Rahman, Nur Liyana Abdul; Asri, Amiza Aqiela Ahmad; Othman, Noor Ilyani; Wan Mokhtar, Ilham

    2017-01-01

    Purpose This study was performed to quantify the repeat rate of imaging acquisitions based on different clinical examinations, and to assess the prevalence of error types in intraoral bitewing and periapical imaging using a digital complementary metal-oxide-semiconductor (CMOS) intraoral sensor. Materials and Methods A total of 8,030 intraoral images were retrospectively collected from 3 groups of undergraduate clinical dental students. The type of examination, stage of the procedure, and reasons for repetition were analysed and recorded. The repeat rate was calculated as the total number of repeated images divided by the total number of examinations. The weighted Cohen's kappa for inter- and intra-observer agreement was used after calibration and prior to image analysis. Results The overall repeat rate on intraoral periapical images was 34.4%. A total of 1,978 repeated periapical images were from endodontic assessment, which included working length estimation (WLE), trial gutta-percha (tGP), obturation, and removal of gutta-percha (rGP). In the endodontic imaging, the highest repeat rate was from WLE (51.9%) followed by tGP (48.5%), obturation (42.2%), and rGP (35.6%). In bitewing images, the repeat rate was 15.1% and poor angulation was identified as the most common cause of error. A substantial level of intra- and interobserver agreement was achieved. Conclusion The repeat rates in this study were relatively high, especially for certain clinical procedures, warranting training in optimization techniques and radiation protection. Repeat analysis should be performed from time to time to enhance quality assurance and hence deliver high-quality health services to patients. PMID:29279822

  11. A voxel based comparative analysis using magnetization transfer imaging and T1-weighted magnetic resonance imaging in progressive supranuclear palsy

    PubMed Central

    Sandhya, Mangalore; Saini, Jitender; Pasha, Shaik Afsar; Yadav, Ravi; Pal, Pramod Kumar

    2014-01-01

    Aims: In progressive supranuclear palsy (PSP), tissue damage occurs in specific cortical and subcortical regions. Voxel based analysis using T1-weighted images depicts quantitative gray matter (GM) atrophy changes. Magnetization transfer (MT) imaging depicts qualitative changes in the brain parenchyma. The purpose of our study was to investigate whether MT imaging could indicate abnormalities in PSP. Settings and Design: A total of 10 patients with PSP (9 men and 1 woman) and 8 controls (5 men and 3 women) were studied with T1-weighted magnetic resonance imaging (MRI) and 3D MT imaging. Voxel based analysis of T1-weighted MRI was performed to investigate brain atrophy while MT was used to study qualitative abnormalities in the brain tissue. We used SPM8 to investigate group differences (with a two sample t-test) using the GM and white matter (WM) segmented data. Results: T1-weighted imaging and MT are equally sensitive in detecting changes in GM and WM in PSP. Magnetization transfer ratio images and magnetization-prepared rapid acquisition of gradient echo revealed extensive bilateral volume and qualitative changes in the orbitofrontal, prefrontal cortex and limbic lobe and subcortical GM. The prefrontal structures involved were the rectal gyrus, medial, inferior frontal gyrus (IFG) and middle frontal gyrus (MFG). The anterior cingulate, cingulate gyrus and lingual gyrus of the limbic lobe and subcortical structures such as the caudate, thalamus, insula and claustrum were also involved. Cerebellar involvement, mainly of the anterior lobe, was also noted. Conclusions: The findings suggest that voxel based MT imaging permits a whole brain, unbiased investigation of central nervous system structural integrity in PSP. PMID:25024571

  12. Emotion Recognition - the need for a complete analysis of the phenomenon of expression formation

    NASA Astrophysics Data System (ADS)

    Bobkowska, Katarzyna; Przyborski, Marek; Skorupka, Dariusz

    2018-01-01

    This article shows how complex emotions are, as demonstrated by the analysis of the changes that occur on the face. The authors present the problem of image analysis for the purpose of identifying emotions. In addition, they point out the importance of recording the phenomenon of the development of emotions on the human face with the use of high-speed cameras, which allows the detection of micro-expressions. The work prepared for this article was based on analyzing the parallax pair correlation coefficients for specific faces. The authors propose dividing the facial image into eight characteristic segments. With this approach, it was confirmed that the pace of expression and the maximum change characteristic of a particular emotion differ for each part of the face at different moments of the emotion.

  13. Image quality assessment of automatic three-segment MR attenuation correction vs. CT attenuation correction.

    PubMed

    Partovi, Sasan; Kohan, Andres; Gaeta, Chiara; Rubbert, Christian; Vercher-Conejero, Jose L; Jones, Robert S; O'Donnell, James K; Wojtylak, Patrick; Faulhaber, Peter

    2013-01-01

    The purpose of this study is to systematically evaluate the usefulness of Positron emission tomography/Magnetic resonance imaging (PET/MRI) images in a clinical setting by assessing the image quality of Positron emission tomography (PET) images using a three-segment MR attenuation correction (MRAC) versus the standard CT attenuation correction (CTAC). We prospectively studied 48 patients who had their clinically scheduled FDG-PET/CT followed by an FDG-PET/MRI. Three nuclear radiologists evaluated the image quality of CTAC vs. MRAC using a Likert scale (five-point scale). A two-sided, paired t-test was performed for comparison purposes. The image quality was further assessed by categorizing it as acceptable (equal to 4 and 5 on the five-point Likert scale) or unacceptable (equal to 1, 2, and 3 on the five-point Likert scale) quality using the McNemar test. When assessing the image quality using the Likert scale, one reader observed a significant difference between CTAC and MRAC (p=0.0015), whereas the other readers did not observe a difference (p=0.8924 and p=0.1880, respectively). When performing the grouping analysis, no significant difference was found between CTAC vs. MRAC for any of the readers (p=0.6137 for reader 1, p=1 for reader 2, and p=0.8137 for reader 3). All three readers more often reported artifacts on the MRAC images than on the CTAC images. There was no clinically significant difference in quality between PET images generated on a PET/MRI system and those from a Positron emission tomography/Computed tomography (PET/CT) system. PET images using the automatic three-segmented MR attenuation method provided diagnostic image quality. However, future research regarding the image quality obtained using different MR attenuation based methods is warranted before PET/MRI can be used clinically.

  14. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including à trous with the B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 m s⁻¹. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.
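
    The à trous (undecimated) decomposition with the B3 spline scaling function mentioned above can be sketched as follows; the wavelet planes would then be passed to the spectral step for direction retrieval. This is a generic starlet-style implementation, not the authors' code.

      import numpy as np
      from scipy.ndimage import convolve1d

      def a_trous_b3(image, n_scales=4):
          """Undecimated (à trous) wavelet decomposition with the B3 spline scaling function.

          Returns the wavelet planes plus the final smooth residual; summing them
          reconstructs the input image.
          """
          base = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
          c = image.astype(float)
          planes = []
          for j in range(n_scales):
              # insert 2**j - 1 zeros between kernel taps (the "holes" of the à trous scheme)
              kernel = np.zeros((len(base) - 1) * 2 ** j + 1)
              kernel[:: 2 ** j] = base
              smoothed = convolve1d(convolve1d(c, kernel, axis=0, mode="mirror"),
                                    kernel, axis=1, mode="mirror")
              planes.append(c - smoothed)   # wavelet plane at scale j
              c = smoothed
          planes.append(c)                  # coarsest smooth approximation
          return planes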

  15. Sex assessment from the acetabular rim by means of image analysis.

    PubMed

    Benazzi, S; Maestri, C; Parisini, S; Vecchi, F; Gruppioni, G

    2008-08-25

    Determining sex from skeletal remains is one of the most important steps in archaeological and forensic anthropology. The present study considers the diagnostic value of the acetabulum based on its planar image and related metric data. For this purpose, 83 adult os coxae of known age were examined. Digital photos of the acetabular area were taken, with each bone in a standardized orientation. Technical drawing software was used to trace the acetabular rim and to measure the related dimensions (area, perimeter, longitudinal and transverse maximum width). The measurements were subjected to SPSS discriminant and classification function analysis. There were significant differences (p
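
    A sketch of the discriminant step using the four rim measurements is given below, with scikit-learn in place of SPSS; cross-validated accuracy stands in for the classification function results, and all names are illustrative.

      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def sex_discriminant(measurements, sex_labels):
          """Linear discriminant analysis on acetabular rim measurements.

          measurements : (n, 4) array of area, perimeter, longitudinal and transverse width
          sex_labels   : array of 0 (female) / 1 (male)
          """
          lda = LinearDiscriminantAnalysis()
          accuracy = cross_val_score(lda, measurements, sex_labels, cv=5).mean()  # cross-validated
          lda.fit(measurements, sex_labels)
          return lda, accuracy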

  16. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications, such as digital forensics, medical treatment and industrial inspection, primarily for diagnostic purposes. Hence, there is a growing interest among researchers in developing new segmentation techniques to aid the diagnosis process. Manual segmentation of images is labour intensive, extremely time consuming and prone to human error, and hence an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is complex and unique to each domain application. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using X-ray teeth images that were taken for real-life dental diagnosis.
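
    The core region-growing step (without the false-boundary elimination the paper adds) can be sketched as a queue-based flood fill that accepts neighbours whose intensity stays close to the running region mean; the tolerance parameter is illustrative.

      import numpy as np
      from collections import deque

      def region_grow(image, seed, tolerance):
          """Grow a region from a seed pixel, accepting 4-connected neighbours whose
          intensity stays within `tolerance` of the running region mean."""
          h, w = image.shape
          mask = np.zeros((h, w), dtype=bool)
          mask[seed] = True
          region_sum, region_n = float(image[seed]), 1
          queue = deque([seed])
          while queue:
              y, x = queue.popleft()
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                  if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                      if abs(float(image[ny, nx]) - region_sum / region_n) <= tolerance:
                          mask[ny, nx] = True
                          region_sum += float(image[ny, nx])
                          region_n += 1
                          queue.append((ny, nx))
          return mask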

  17. Clinical evaluation of watermarked medical images.

    PubMed

    Zain, Jasni M; Fauzi, Abdul M; Aziz, Azian A

    2006-01-01

    Digital watermarking of medical images provides security to the images. The purpose of this study was to see whether digitally watermarked images changed clinical diagnoses when assessed by radiologists. We embedded a 256-bit watermark in various medical images in the region of non-interest (RONI) and 480 Kbits in both the region of interest (ROI) and the RONI. Our results showed that watermarking medical images did not alter clinical diagnoses. In addition, there was no difference in image quality when visually assessed by the medical radiologists. We therefore concluded that digital watermarking of medical images is safe in terms of preserving image quality for clinical purposes.
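
    One simple way to embed a payload in the RONI, in the spirit of the study, is least-significant-bit substitution, as sketched below; the study does not specify its embedding scheme, so treat this as a generic illustration.

      import numpy as np

      def embed_watermark_lsb(image, roni_mask, bits):
          """Embed watermark bits in the least significant bit of pixels inside the
          region of non-interest (RONI), leaving the diagnostic region untouched."""
          wm = image.copy()
          ys, xs = np.nonzero(roni_mask)
          if len(bits) > len(ys):
              raise ValueError("RONI too small for the payload")
          for bit, y, x in zip(bits, ys, xs):
              # clear the least significant bit, then write the payload bit
              wm[y, x] = wm[y, x] - (wm[y, x] % 2) + int(bit)
          return wm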

  18. The Role and Image of Midwives in Caribbean Society from the Colonial Period to the Present: A Critical Analysis of the Discourse Relevant to Midwifery in Specific Hispanophone, Anglophone, and Francophone Contexts

    ERIC Educational Resources Information Center

    Crespo-Valedon, Damarys T.

    2017-01-01

    The dominant discourse on midwifery has been characterized by myths that have been constructed and perpetuated through oral and written discourse. The purpose of this research is to engage in a critical analysis of that discourse, with special focus on Hispanophone, Anglophone, and Francophone contexts in the Caribbean from colonial times to the…

  19. KENNEDY SPACE CENTER, FLA. - These towers are part of one of the world’s highest performing visual film analysis systems, developed to review and analyze previous shuttle flight data in preparation for the shuttle fleet’s return to flight. The system is being used today for another purpose. NASA has permitted its use in helping to analyze a film that shows a recent kidnapping in progress in Florida. Developed by NASA, United Space Alliance (USA) and Silicon Graphics Inc., the system allows multiple-person collaboration, highly detailed manipulation and evaluation of specific imagery. The system is housed in the Image Analysis Facility inside the Vehicle Assembly Building. [Photo taken Aug. 15, 2003, courtesy of Terry Wallace, SGI]

    NASA Image and Video Library

    2004-02-04

    KENNEDY SPACE CENTER, FLA. - These towers are part of one of the world’s highest performing visual film analysis systems, developed to review and analyze previous shuttle flight data in preparation for the shuttle fleet’s return to flight. The system is being used today for another purpose. NASA has permitted its use in helping to analyze a film that shows a recent kidnapping in progress in Florida. Developed by NASA, United Space Alliance (USA) and Silicon Graphics Inc., the system allows multiple-person collaboration, highly detailed manipulation and evaluation of specific imagery. The system is housed in the Image Analysis Facility inside the Vehicle Assembly Building. [Photo taken Aug. 15, 2003, courtesy of Terry Wallace, SGI]

  20. Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online

    NASA Astrophysics Data System (ADS)

    Romano, C.; Graff, P. V.; Runco, S.

    2017-12-01

    Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space Center to engage the public in the analysis of images taken from space by astronauts to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while learning complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete required tasks. A group of six users was recruited online for unmoderated and initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio. The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project: • Concise explanation of the project, its context, and its purpose; • Including a mention of the funding agency (in this case, NASA); • A preview of the specific tasks required of participants; • A dedicated user interface for the actual citizen science interaction. In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.

  1. Reinventing Image Detective: An Evidence-Based Approach to Citizen Science Online

    NASA Technical Reports Server (NTRS)

    Romano, Cia; Graff, Paige V.; Runco, Susan

    2017-01-01

    Usability studies demonstrate that web users are notoriously impatient, spending as little as 15 seconds on a home page. How do you get users to stay long enough to understand a citizen science project? How do you get users to complete complex citizen science tasks online? Image Detective, a citizen science project originally developed by scientists and science engagement specialists at the NASA Johnson Space center to engage the public in the analysis of images taken from space by astronauts to help enhance NASA's online database of astronaut imagery, partnered with the CosmoQuest citizen science platform to modernize, offering new and improved options for participation in Image Detective. The challenge: to create a web interface that builds users' skills and knowledge, creating engagement while learning complex concepts essential to the accurate completion of tasks. The project team turned to usability testing for an objective understanding of how users perceived Image Detective and the steps required to complete required tasks. A group of six users was recruited online for unmoderated and initial testing. The users followed a think-aloud protocol while attempting tasks, and were recorded on video and audio. The usability test examined users' perception of four broad areas: the purpose of and context for Image Detective; the steps required to successfully complete the analysis (differentiating images of Earth's surface from those showing outer space and identifying common surface features); locating the image center point on a map of Earth; and finally, naming geographic locations or natural events seen in the image. Usability test findings demonstrated that the following best practices can increase participation in Image Detective and can be applied to the successful implementation of any citizen science project: (1) Concise explanation of the project, its context, and its purpose; (2) Including a mention of the funding agency (in this case, NASA); (3) A preview of the specific tasks required of participants; (4) A dedicated user interface for the actual citizen science interaction. In addition, testing revealed that users may require additional context when a task is complex, difficult, or unusual (locating a specific image and its center point on a map of Earth). Video evidence will be made available with this presentation.

  2. Integrated spectral and image analysis of hyperspectral scattering data for prediction of apple fruit firmness and soluble solids content

    USDA-ARS?s Scientific Manuscript database

    Spectral scattering is useful for assessing the firmness and soluble solids content (SSC) of apples. In previous research, mean reflectance extracted from the hyperspectral scattering profiles was used for this purpose since the method is simple and fast and also gives relatively good predictions. T...

  3. The Impact of Visuals: Using a Poster To Present Metaphor.

    ERIC Educational Resources Information Center

    Riejos, Ana M. Roldan; Mansilla, Paloma, Ubeda; Castillejos, Ana M. Martin

    2001-01-01

    Values the use of a contextualized poster in which the images are more important than the actual words. Confirms this through an analysis of a questionnaire handed out to a sample of 'English for Specific Purposes' students. Addresses the pedagogical implications that the use of a poster has proved to have with a multidisciplinary group of…

  4. Study of pipe thickness loss using a neutron radiography method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohamed, Abdul Aziz; Wahab, Aliff Amiru Bin; Yazid, Hafizal B.

    2014-02-12

    The purpose of this preliminary work was to study thickness changes in objects using neutron radiography, and the radiographic technique itself was studied in the course of the project. The experiment was carried out at the NUR-2 facility of the TRIGA research reactor at the Malaysian Nuclear Agency, Malaysia. Test samples of varying materials were radiographed using the direct technique, and the radiographic images were recorded on nitrocellulose film. The films were digitized and then processed and analyzed; digital processing with the Isee! software was used to produce better images for analysis. The thickness changes in the images were measured and compared with the real thickness of the objects. From the data collected, the percentage differences between measured and real thickness are below 2%, a very low deviation from the original values, thereby verifying the neutron radiography technique used in this project.

  5. Shape-Constrained Segmentation Approach for Arctic Multiyear Sea Ice Floe Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Brucker, Ludovic; Ivanoff, Alvaro; Tilton, James C.

    2013-01-01

    The melting of sea ice is correlated to increases in sea surface temperature and associated climatic changes. Therefore, it is important to investigate how rapidly sea ice floes melt. For this purpose, a new TempoSeg method for multitemporal segmentation of multiyear ice floes is proposed. The microwave radiometer is used to track the position of an ice floe. Then, a time series of MODIS images is created with the ice floe in the image center. The TempoSeg method is performed to segment these images into two regions: Floe and Background. First, morphological feature extraction is applied. Then, the central image pixel is marked as Floe, and shape-constrained best merge region growing is performed. The resulting two-region map is post-filtered by applying morphological operators. We have successfully tested our method on a set of MODIS images and estimated the area of a sea ice floe as a function of time.

  6. D3D augmented reality imaging system: proof of concept in mammography.

    PubMed

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  7. Image processing and 3D visualization in forensic pathologic examination

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1996-02-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

  8. Use of laser range finders and range image analysis in automated assembly tasks

    NASA Technical Reports Server (NTRS)

    Alvertos, Nicolas; Dcunha, Ivan

    1990-01-01

    A proposition is made to study the effect of filtering processes on range images and to evaluate the performance of two different laser range mappers. Median filtering was utilized to remove noise from the range images. First- and second-order derivatives are then utilized to locate the similarities and dissimilarities between the processed and the original images. Range depth information is converted into spatial coordinates, and a set of coefficients which describe 3-D objects is generated using the algorithm developed in the second phase of this research. Range images of spheres and cylinders are used for experimental purposes. An algorithm was developed to compare the performance of two different laser range mappers based upon the range depth information of surfaces generated by each of the mappers. Furthermore, an approach based on 2-D analytic geometry is also proposed which serves as a basis for the recognition of regular 3-D geometric objects.
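
    A small sketch of the filtering-and-comparison step (median filtering, then first- and second-order derivatives to locate where the processed image departs from the original) is given below; the kernel size and the Laplacian choice are assumptions.

      import numpy as np
      from scipy.ndimage import median_filter, laplace

      def compare_filtered_range_image(range_image, kernel_size=3):
          """Median-filter a range image, then use first- and second-order derivatives
          to locate dissimilarities between the filtered and the original image."""
          original = range_image.astype(float)
          filtered = median_filter(original, size=kernel_size)
          gy_o, gx_o = np.gradient(original)
          gy_f, gx_f = np.gradient(filtered)
          first_order_diff = np.hypot(gx_o - gx_f, gy_o - gy_f)              # gradient change
          second_order_diff = np.abs(laplace(original) - laplace(filtered))  # curvature change
          return filtered, first_order_diff, second_order_diff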

  9. Hippocampus shape analysis for temporal lobe epilepsy detection in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Kohan, Zohreh; Azmi, Reza

    2016-03-01

    There is evidence in the literature that Temporal Lobe Epilepsy (TLE) causes some lateralized atrophy and deformation of the hippocampus and other substructures of the brain. Magnetic Resonance Imaging (MRI), due to its high-contrast soft tissue imaging, is one of the most popular imaging modalities used in TLE diagnosis and treatment procedures. Using an algorithm to help clinicians perform better and more effective shape deformation analysis could improve the diagnosis and treatment of the disease. In this project our purpose was to design, implement and test a classification algorithm for MRIs based on hippocampal asymmetry detection using shape- and size-based features. Our method consisted of two main parts: (1) shape feature extraction, and (2) image classification. We tested 11 different shape and size features and selected four of them that detect the asymmetry in the hippocampus significantly in a randomly selected subset of the dataset. Then, we employed a support vector machine (SVM) classifier to classify the remaining images of the dataset into normal and epileptic images using our selected features. The dataset contains 25 patient images, of which 12 cases were used as a training set and the remaining 13 cases for testing the performance of the classifier. We measured an accuracy, specificity and sensitivity of, respectively, 76%, 100%, and 70% for our algorithm. The preliminary results show that using shape and size features for detecting hippocampal asymmetry could be helpful in TLE diagnosis in MRI.
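
    A sketch of the classification stage is shown below: a normalized left-right asymmetry index is computed for each selected shape/size feature and fed to an SVM, mirroring the pipeline described above; the linear kernel and the index definition are assumptions, not the authors' exact choices.

      import numpy as np
      from sklearn.svm import SVC

      def asymmetry_index(left_feature, right_feature):
          """Normalized left-right asymmetry of a hippocampal shape/size feature."""
          return (left_feature - right_feature) / (left_feature + right_feature)

      def train_tle_classifier(left_feats, right_feats, labels):
          """SVM on per-subject asymmetry indices; labels: 1 = TLE, 0 = control.

          left_feats, right_feats : (n_subjects, n_features) arrays of shape/size features
          """
          X = asymmetry_index(np.asarray(left_feats, float), np.asarray(right_feats, float))
          clf = SVC(kernel="linear")
          clf.fit(X, labels)
          return clf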

  10. Successful implementation of image-guided radiation therapy quality assurance in the Trans Tasman Radiation Oncology Group 08.01 PROFIT Study.

    PubMed

    Middleton, Mark; Frantzis, Jim; Healy, Brendan; Jones, Mark; Murry, Rebecca; Kron, Tomas; Plank, Ashley; Catton, Charles; Martin, Jarad

    2011-12-01

    The quality assurance (QA) of image-guided radiation therapy (IGRT) within clinical trials is in its infancy, but its importance will continue to grow as IGRT becomes the standard of care. The purpose of this study was to demonstrate the feasibility of IGRT QA as part of the credentialing process for a clinical trial. As part of the accreditation process for a randomized trial of prostate cancer hypofractionation, IGRT benchmarking across multiple sites was incorporated. Each participating site underwent IGRT credentialing via a site visit. In all centers, intraprostatic fiducials were used. A real-time assessment of IGRT analysis was performed using Varian's Offline Review image analysis package. Two-dimensional (2D) kV and MV electronic portal imaging prostate patient datasets were used, consisting of 39 treatment verification images for 2D/2D comparison with the digitally reconstructed radiograph derived from the planning scan. The influence of differing sites, image modality, and observer experience on IGRT was then assessed. Statistical analysis of the mean mismatch errors showed that IGRT analysis was performed uniformly regardless of institution, therapist seniority, or imaging modality across the three orthogonal planes. The IGRT component of clinical trials that include sophisticated planning and treatment protocols must undergo stringent QA. The IGRT technique of intraprostatic fiducials has been shown in the context of this trial to be undertaken in a uniform manner across Australia. Extending this concept to many sites with different equipment and IGRT experience will require a robust remote credentialing process. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  11. Magnetic particle imaging for in vivo blood flow velocity measurements in mice

    NASA Astrophysics Data System (ADS)

    Kaul, Michael G.; Salamon, Johannes; Knopp, Tobias; Ittrich, Harald; Adam, Gerhard; Weller, Horst; Jung, Caroline

    2018-03-01

    Magnetic particle imaging (MPI) is a new imaging technology. It is a potential candidate to be used for angiographic purposes, to study perfusion and cell migration. The aim of this work was to measure velocities of the flowing blood in the inferior vena cava of mice, using MPI, and to evaluate it in comparison with magnetic resonance imaging (MRI). A phantom mimicking the flow within the inferior vena cava with velocities of up to 21 cm s⁻¹ was used for the evaluation of the applied analysis techniques. Time–density and distance–density analyses for bolus tracking were performed to calculate flow velocities. These findings were compared with the calibrated velocities set by a flow pump, and it can be concluded that velocities of up to 21 cm s⁻¹ can be measured by MPI. A time–density analysis using an arrival time estimation algorithm showed the best agreement with the preset velocities. In vivo measurements were performed in healthy FVB mice (n = 10). MRI experiments were performed using phase contrast (PC) for velocity mapping. For MPI measurements, a standardized injection of a superparamagnetic iron oxide tracer was applied. In vivo MPI data were evaluated by a time–density analysis and compared to PC MRI. A Bland–Altman analysis revealed good agreement between the in vivo velocities acquired by MRI of 4.0 ± 1.5 cm s⁻¹ and those measured by MPI of 4.8 ± 1.1 cm s⁻¹. Magnetic particle imaging is a new tool with which to measure and quantify flow velocities. It is fast, radiation-free, and produces 3D images. It therefore offers the potential for vascular imaging.
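
    The time–density (arrival-time) analysis reduces to estimating when the bolus reaches two positions a known distance apart; the sketch below uses a half-maximum arrival criterion, which is one possible arrival-time estimator and not necessarily the algorithm used in the study.

      import numpy as np

      def bolus_velocity(curve_upstream, curve_downstream, dt, roi_separation_cm):
          """Blood flow velocity from two tracer time-density curves along the vessel.

          Arrival time at each ROI is estimated as the first sample reaching half of the
          curve's peak; velocity = ROI separation / arrival-time difference.
          """
          def arrival_time(curve):
              half_max = 0.5 * np.max(curve)
              return np.argmax(curve >= half_max) * dt   # first frame at/above half maximum

          delta_t = arrival_time(curve_downstream) - arrival_time(curve_upstream)
          return roi_separation_cm / delta_t             # cm per second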

  12. Relationships among muscle dysmorphia characteristics, body image quality of life, and coping in males.

    PubMed

    Tod, D; Edwards, C

    2015-09-01

    The purpose of this study was to examine relationships among bodybuilding dependence, muscle satisfaction, body image-related quality of life and body image-related coping strategies, and test the hypothesis that muscle dysmorphia characteristics may predict quality of life via coping strategies. Participants (294 males, mean age = 20.5 years, SD = 3.1) participated in a cross-sectional survey. Participants completed questionnaires assessing muscle satisfaction, bodybuilding dependence, body image-related quality of life and body image-related coping. Quality of life was correlated positively with muscle satisfaction and bodybuilding dependence but negatively with body image coping (P<0.05). Body image coping was correlated positively with bodybuilding dependence and negatively with muscle satisfaction (P<0.05). Mediation analysis found that bodybuilding dependence and muscle satisfaction predicted quality of life both directly and indirectly via body image coping strategies (as evidenced by the bias corrected and accelerated bootstrapped confidence intervals). These results provide preliminary evidence regarding the ways that muscularity concerns might influence body image-related quality of life. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  13. Bi-temporal analysis of landscape changes in the easternmost mediterranean deltas using binary and classified change information.

    PubMed

    Alphan, Hakan

    2013-03-01

    The aim of this study is (1) to quantify landscape changes in the easternmost Mediterranean deltas using a bi-temporal binary change detection approach and (2) to analyze relationships between conservation/management designations and various categories of change that indicate type, degree and severity of human impact. For this purpose, image differencing and ratioing were applied to Landsat TM images of 1984 and 2006. A total of 136 candidate change images including normalized difference vegetation index (NDVI) and principal component analysis (PCA) difference images were tested to understand the performance of bi-temporal pre-classification analysis procedures in the Mediterranean delta ecosystems. Results showed that visible image algebra provided higher accuracies than NDVI and PCA differencing. On the other hand, Band 5 differencing had one of the lowest change detection performances. Seven superclasses of change were identified using from/to change categories between the earlier and later dates. These classes were used to understand the spatial character of anthropogenic impacts in the study area and derive qualitative and quantitative change information within and outside of the conservation/management areas. Change analysis indicated that natural site and wildlife reserve designations fell short of protecting sand dunes from agricultural expansion in the west. The east of the study area, however, was exposed to the least human impact because its nature conservation status kept human interference at a minimum. Implications of these changes were discussed and solutions were proposed to deal with management problems leading to environmental change.
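
    As a rough illustration of the bi-temporal differencing step described above, the following Python sketch computes an NDVI difference image from two co-registered dates and flags pixels deviating by more than two standard deviations as change. It is only a minimal sketch: the array names, the synthetic reflectance data, and the 2-sigma threshold rule are assumptions, not the procedure or thresholds used in the study.

    ```python
    import numpy as np

    def ndvi(red, nir, eps=1e-6):
        """Normalized difference vegetation index for one date."""
        red = red.astype(float)
        nir = nir.astype(float)
        return (nir - red) / (nir + red + eps)

    def ndvi_change_mask(red_t1, nir_t1, red_t2, nir_t2, k=2.0):
        """Bi-temporal binary change detection by NDVI differencing.

        Pixels whose NDVI difference deviates from the scene mean by more
        than k standard deviations are flagged as 'change'.
        """
        diff = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
        return np.abs(diff - diff.mean()) > k * diff.std()

    # toy example with random reflectances standing in for the 1984 and 2006 scenes
    rng = np.random.default_rng(0)
    shape = (100, 100)
    red84, nir84 = rng.uniform(0, 1, shape), rng.uniform(0, 1, shape)
    red06, nir06 = red84.copy(), nir84.copy()
    nir06[40:60, 40:60] *= 0.3          # simulate vegetation loss in one patch
    mask = ndvi_change_mask(red84, nir84, red06, nir06)
    print("changed pixels:", int(mask.sum()))
    ```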

  14. SU-F-J-94: Development of a Plug-in Based Image Analysis Tool for Integration Into Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, D; Anderson, C; Mayo, C

    Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.
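
    The abstract does not disclose the plug-in's internals (the script itself is written in C# against the vendor API), so the following Python fragment is only a generic sketch of the kind of voxel-by-voxel dose-functional volume analysis it describes: a dose grid and a registered functional image are combined into a dose-intensity histogram over a structure mask. The array names, the random stand-in data, and the 5 Gy binning are assumptions.

    ```python
    import numpy as np

    def dose_intensity_histogram(dose, intensity, mask, dose_bins):
        """Mean functional-image intensity in each dose bin, restricted to a structure mask."""
        d = dose[mask]
        i = intensity[mask]
        idx = np.digitize(d, dose_bins)
        means = [i[idx == b].mean() if np.any(idx == b) else np.nan
                 for b in range(1, len(dose_bins))]
        return np.array(means)

    rng = np.random.default_rng(1)
    dose = rng.uniform(0, 70, (50, 50, 50))      # Gy, stand-in for the TPS dose grid
    spect = rng.uniform(0, 1, (50, 50, 50))      # stand-in for the registered perfusion SPECT
    structure_mask = np.ones_like(dose, dtype=bool)
    bins = np.arange(0, 75, 5)
    print(dose_intensity_histogram(dose, spect, structure_mask, bins))
    ```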

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, J; Gong, G; Cui, Y

    Purpose: To predict early pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multi-region analysis of dynamic contrast enhancement magnetic resonance imaging (DCE-MRI). Methods: In this institutional review board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with a high-temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. Results: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast wash-out were statistically significant (p < 0.05) after correcting for multiple testing, with areas under the ROC curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (p = 0.002) in leave-one-out cross validation. This improved upon conventional imaging predictors such as tumor volume (AUC=0.53) and texture features based on whole-tumor analysis (AUC=0.65). Conclusion: The heterogeneity of the tumor subregion associated with fast wash-out on DCE-MRI predicted early pathological response to neoadjuvant chemotherapy in breast cancer.
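
    A compressed sketch of the multi-region pipeline described above (PCA on the voxel time curves, k-means on the eigenmaps, GLCM texture features per subregion) can be written with scikit-learn and scikit-image as below. The synthetic "DCE" curves, the 3-cluster choice, and the crude subregion masking are assumptions for illustration only and do not reproduce the study's feature definitions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(2)

    # synthetic DCE-MRI tumour data: 32x32 voxels, 40 time points
    n_t = 40
    curves = rng.normal(0, 0.05, (32 * 32, n_t)) + np.linspace(0, 1, n_t)

    # 1) PCA reduces the high-temporal-resolution curves to a few eigenmaps
    eigen = PCA(n_components=3).fit_transform(curves)

    # 2) k-means on the PCA scores partitions the tumour into 3 subregions
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(eigen)
    label_map = labels.reshape(32, 32)

    # 3) GLCM (Haralick) texture features inside one subregion of an enhancement map
    enhancement = (curves[:, -1] - curves[:, 0]).reshape(32, 32)
    img = np.uint8(255 * (enhancement - enhancement.min()) / np.ptp(enhancement))
    img[label_map != 0] = 0                      # crude masking of the other subregions
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "energy", "homogeneity", "correlation"):
        print(prop, graycoprops(glcm, prop)[0, 0])
    ```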

  16. Automated texture-based identification of ovarian cancer in confocal microendoscope images

    NASA Astrophysics Data System (ADS)

    Srivastava, Saurabh; Rodriguez, Jeffrey J.; Rouse, Andrew R.; Brewer, Molly A.; Gmitro, Arthur F.

    2005-03-01

    The fluorescence confocal microendoscope provides high-resolution, in-vivo imaging of cellular pathology during optical biopsy. There are indications that the examination of human ovaries with this instrument has diagnostic implications for the early detection of ovarian cancer. The purpose of this study was to develop a computer-aided system to facilitate the identification of ovarian cancer from digital images captured with the confocal microendoscope system. To achieve this goal, we modeled the cellular-level structure present in these images as texture and extracted features based on first-order statistics, spatial gray-level dependence matrices, and spatial-frequency content. Selection of the best features for classification was performed using traditional feature selection techniques including stepwise discriminant analysis, forward sequential search, a non-parametric method, principal component analysis, and a heuristic technique that combines the results of these methods. The best set of features selected was used for classification, and performance of various machine classifiers was compared by analyzing the areas under their receiver operating characteristic curves. The results show that it is possible to automatically identify patients with ovarian cancer based on texture features extracted from confocal microendoscope images and that the machine performance is superior to that of the human observer.
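
    The feature-selection-plus-classifier-comparison workflow described here can be mimicked with scikit-learn; the sketch below runs a forward sequential search on a surrogate texture-feature matrix and then compares classifiers by cross-validated area under the ROC curve. The synthetic data, the choice of six selected features, and the particular classifiers are assumptions; the study's own feature set and classifiers are not reproduced.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.linear_model import LogisticRegression
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # surrogate "texture feature" matrix: 200 image samples, 30 features, 2 classes
    X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                               random_state=0)

    # forward sequential search for a useful feature subset
    selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                         n_features_to_select=6, direction="forward")
    X_sel = selector.fit_transform(X, y)

    # compare machine classifiers by cross-validated ROC AUC
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC()),
                      ("logistic", LogisticRegression(max_iter=1000))]:
        auc = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: AUC = {auc:.3f}")
    ```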

  17. NEFI: Network Extraction From Images

    PubMed Central

    Dirnberger, M.; Kehl, T.; Neumann, A.

    2015-01-01

    Networks are amongst the central building blocks of many systems. Given a graph of a network, methods from graph theory enable a precise investigation of its properties. Software for the analysis of graphs is widely available and has been applied to study various types of networks. In some applications, graph acquisition is relatively simple. However, for many networks data collection relies on images where graph extraction requires domain-specific solutions. Here we introduce NEFI, a tool that extracts graphs from images of networks originating in various domains. Regarding previous work on graph extraction, theoretical results are fully accessible only to an expert audience and ready-to-use implementations for non-experts are rarely available or insufficiently documented. NEFI provides a novel platform allowing practitioners to easily extract graphs from images by combining basic tools from image processing, computer vision and graph theory. Thus, NEFI constitutes an alternative to tedious manual graph extraction and special purpose tools. We anticipate NEFI to enable time-efficient collection of large datasets. The analysis of these novel datasets may open up the possibility to gain new insights into the structure and function of various networks. NEFI is open source and available at http://nefi.mpi-inf.mpg.de. PMID:26521675
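
    NEFI's actual pipeline combines several segmentation and thinning steps; purely as a toy illustration of the general idea (skeletonize a binary image, then turn skeleton pixels into a graph), a naive version can be written with scikit-image and networkx as follows. The one-node-per-pixel construction and the toy image are assumptions and are far simpler than NEFI's own graph extraction.

    ```python
    import numpy as np
    import networkx as nx
    from skimage.morphology import skeletonize

    # toy binary image of a "network": a cross of two thick line segments
    img = np.zeros((60, 60), dtype=bool)
    img[28:32, 5:55] = True
    img[5:55, 28:32] = True

    skel = skeletonize(img)

    # naive graph extraction: one node per skeleton pixel, edges between 8-neighbours
    G = nx.Graph()
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in pixels:
                    G.add_edge((y, x), (y + dy, x + dx))

    # skeleton pixels with degree > 2 approximate the network's branch points
    junctions = [n for n, d in G.degree() if d > 2]
    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges,",
          len(junctions), "junction pixels")
    ```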

  18. The readings of smoking fathers: a reception analysis of tobacco cessation images.

    PubMed

    Johnson, Joy L; Oliffe, John L; Kelly, Mary T; Bottorff, Joan L; LeBeau, Karen

    2009-09-01

    The purpose of this qualitative study was to examine how new fathers decode image-based anti-smoking messages and uncover the extent to which ideals of masculinity might influence men to take up and/or disregard smoking cessation messages. The authors analyzed 5 images that had been used to promote smoking cessation and arrived at a consensus about the dominant discourse encoded by each image. During face-to-face interviews, new fathers were invited to discuss the images; these interview data were coded and analyzed using a social constructionist gender analysis. The study findings highlight how most men negotiated or opposed dominant discourses of health that communicated the dangers of smoking by reproducing dominant ideals of masculinity, including explicit disregard for self-health. They accepted dominant social discourses of fathering that reproduced traditional notions of masculinity, such as the protector and provider. The authors conclude that tobacco interventions targeted to new fathers must (a) develop more awareness of the ability of audiences to select discourses that empower their own interpretive positioning with regard to media, and (b) deconstruct and engage with context and age-specific masculine ideals to avoid providing rationales for continued tobacco use.

  19. Prompt gamma ray imaging for verification of proton boron fusion therapy: A Monte Carlo study.

    PubMed

    Shin, Han-Back; Yoon, Do-Kun; Jung, Joo-Young; Kim, Moo-Sub; Suh, Tae Suk

    2016-10-01

    The purpose of this study was to verify acquisition feasibility of a single photon emission computed tomography image using prompt gamma rays for proton boron fusion therapy (PBFT) and to confirm an enhanced therapeutic effect of PBFT by comparison with conventional proton therapy without use of boron. Monte Carlo simulation was performed to acquire a reconstructed image during PBFT. We acquired the percentage depth dose (PDD) of the proton beams in a water phantom, the energy spectrum of the prompt gamma rays, and tomographic images, including the boron uptake region (BUR; target). The prompt gamma ray image was reconstructed using maximum likelihood expectation maximisation (MLEM) with raw data from 64 projections. To verify the reconstructed image, both an image profile and contrast analysis according to the iteration number were conducted. In addition, the physical distance between two BURs in the region of interest of each BUR was measured. The PDD of the proton beam in the water phantom including the BURs showed a more efficient dose deposition in the tumour region than that of conventional proton therapy. A 719 keV prompt gamma ray peak was clearly observed in the prompt gamma ray energy spectrum. The prompt gamma ray image was reconstructed successfully using 64 projections. Different image profiles including two BURs were acquired from the reconstructed image according to the iteration number. We confirmed successful acquisition of a prompt gamma ray image during PBFT. In addition, the quantitative image analysis results showed relatively good performance for further study. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
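
    For readers unfamiliar with the reconstruction step, the sketch below shows a generic maximum likelihood expectation maximisation (MLEM) update for Poisson projection data, with a random system matrix and a few hot pixels standing in for the boron uptake regions. The system model, data sizes, and iteration count are illustrative assumptions only; they do not correspond to the 64-projection Monte Carlo setup of the study.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=100):
        """Maximum-likelihood expectation maximisation for y ~ Poisson(A @ x)."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                      # sensitivity image
        for _ in range(n_iter):
            proj = A @ x
            proj[proj == 0] = 1e-12               # avoid division by zero
            x *= (A.T @ (y / proj)) / sens        # multiplicative MLEM update
        return x

    # toy example: hot "boron uptake" pixels in a 16x16 image, random projection matrix
    rng = np.random.default_rng(3)
    n_pix, n_proj = 16 * 16, 64 * 16
    truth = np.zeros(n_pix)
    truth[[50, 51, 200, 201]] = 100.0
    A = rng.uniform(0, 1, (n_proj, n_pix))
    y = rng.poisson(A @ truth).astype(float)
    recon = mlem(A, y)
    print("hottest reconstructed pixels:", np.argsort(recon)[-4:])
    ```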

  20. Spherical grating based x-ray Talbot interferometry

    PubMed Central

    Cong, Wenxiang; Xi, Yan; Wang, Ge

    2015-01-01

    Purpose: Grating interferometry is a state-of-the-art x-ray imaging approach, which can acquire information on x-ray attenuation, phase shift, and small-angle scattering simultaneously. Phase-contrast imaging and dark-field imaging are very sensitive to microstructural variation and offer superior contrast resolution for biological soft tissues. However, a common x-ray tube is a point-like source. As a result, the popular planar grating imaging configuration seriously restricts the flux of photons and decreases the visibility of signals, yielding a limited field of view. The purpose of this study is to extend the planar x-ray grating imaging theory and methods to a spherical grating scheme for a wider range of preclinical and clinical applications. Methods: A spherical grating matches the wave front of a point x-ray source very well, allowing the perpendicular incidence of x-rays on the grating to achieve a higher visibility over a larger field of view than the planar grating counterpart. A theoretical analysis of the Talbot effect for spherical grating imaging is proposed to establish a basic foundation for x-ray spherical grating interferometry. An efficient method of spherical grating imaging is also presented to extract attenuation, differential phase, and dark-field images in the x-ray spherical grating interferometer. Results: Talbot self-imaging with spherical gratings is analyzed based on the Rayleigh–Sommerfeld diffraction formula, featuring a periodic angular distribution in a polar coordinate system. The Talbot distance is derived to reveal the Talbot self-imaging pattern. Numerical simulation results show the self-imaging phenomenon of a spherical grating interferometer, which is in agreement with the theoretical prediction. Conclusions: X-ray Talbot interferometry with spherical gratings holds significant practical promise. Relative to planar grating imaging, spherical grating based x-ray Talbot interferometry has a larger field of view and improves both signal visibility and dose utilization for pre-clinical and clinical applications. PMID:26520741

  1. Spherical grating based x-ray Talbot interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cong, Wenxiang, E-mail: congw@rpi.edu; Xi, Yan, E-mail: xiy2@rpi.edu; Wang, Ge, E-mail: wangg6@rpi.edu

    2015-11-15

    Purpose: Grating interferometry is a state-of-the-art x-ray imaging approach, which can acquire information on x-ray attenuation, phase shift, and small-angle scattering simultaneously. Phase-contrast imaging and dark-field imaging are very sensitive to microstructural variation and offer superior contrast resolution for biological soft tissues. However, a common x-ray tube is a point-like source. As a result, the popular planar grating imaging configuration seriously restricts the flux of photons and decreases the visibility of signals, yielding a limited field of view. The purpose of this study is to extend the planar x-ray grating imaging theory and methods to a spherical grating scheme for a wider range of preclinical and clinical applications. Methods: A spherical grating matches the wave front of a point x-ray source very well, allowing the perpendicular incidence of x-rays on the grating to achieve a higher visibility over a larger field of view than the planar grating counterpart. A theoretical analysis of the Talbot effect for spherical grating imaging is proposed to establish a basic foundation for x-ray spherical grating interferometry. An efficient method of spherical grating imaging is also presented to extract attenuation, differential phase, and dark-field images in the x-ray spherical grating interferometer. Results: Talbot self-imaging with spherical gratings is analyzed based on the Rayleigh–Sommerfeld diffraction formula, featuring a periodic angular distribution in a polar coordinate system. The Talbot distance is derived to reveal the Talbot self-imaging pattern. Numerical simulation results show the self-imaging phenomenon of a spherical grating interferometer, which is in agreement with the theoretical prediction. Conclusions: X-ray Talbot interferometry with spherical gratings holds significant practical promise. Relative to planar grating imaging, spherical grating based x-ray Talbot interferometry has a larger field of view and improves both signal visibility and dose utilization for pre-clinical and clinical applications.

  2. The application of profile imaging for monitoring organic and metal pollution in the Venice lagoon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bona, F.; Maffiotti, A.

    1995-12-31

    Since 1993 the technique of Sediment Profile Imaging (SPI) has been applied in monitoring the Venice Lagoon. The purposes of the monitoring were several, ranging from an initial baseline survey of sediment quality, to the control of Ulva rigida proliferation, to sediment quality assessment for dredging and capping activities in restricted areas of the lagoon. Data resulting from each computer image analysis have been summarized in one index which takes into consideration the mutual interactions between the physical and chemical conditions and the benthic community. In this way a spatial and seasonal gradient in the quality of Venice Lagoon sediments has been established and the key roles of the organic enrichment and of the ecosystem hydrodynamics have been confirmed. The underwater camera and image analysis have also been an effective screening tool to address further investigations in those areas of particular concern for sediment contamination. On the basis of the SPI indices a selection of stations has been made in order to sample and perform sediment toxicity tests and chemical analyses to assess contamination levels.

  3. Multispectral image restoration of historical documents based on LAAMs and mathematical morphology

    NASA Astrophysics Data System (ADS)

    Lechuga-S., Edwin; Valdiviezo-N., Juan C.; Urcid, Gonzalo

    2014-09-01

    This research introduces an automatic technique designed for the digital restoration of the damaged parts in historical documents. For this purpose an imaging spectrometer is used to acquire a set of images in the wavelength interval from 400 to 1000 nm. Assuming the presence of linearly mixed spectral pixels registered from the multispectral image, our technique uses two lattice autoassociative memories to extract the set of pure pigments composing a given document. Through a spectral unmixing analysis, our method produces fractional abundance maps indicating the distributions of each pigment in the scene. These maps are then used to locate cracks and holes in the document under study. The restoration process is performed by the application of a region filling algorithm, based on morphological dilation, followed by a color interpolation to restore the original appearance of the filled areas. This procedure has been successfully applied to the analysis and restoration of three multispectral data sets: two corresponding to artificially superimposed scripts and real data acquired from a Mexican pre-Hispanic codex, whose restoration results are presented.
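
    The unmixing step that produces the fractional abundance maps can be illustrated with a per-pixel non-negative least squares fit, as in the sketch below. Here the endmember spectra are simply assumed to be known (standing in for the output of the lattice autoassociative memories, which are not implemented here), and the synthetic cube and pigment spectra are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    bands = 20

    # endmember spectra (columns): two hypothetical "pure pigments"
    E = np.stack([np.linspace(0.2, 0.9, bands),            # pigment 1
                  np.exp(-np.linspace(0, 3, bands))]).T    # pigment 2

    # synthetic multispectral cube (10x10 pixels) of linear mixtures plus noise
    true_ab = rng.dirichlet([1, 1], size=(10, 10))
    cube = true_ab @ E.T + rng.normal(0, 0.01, (10, 10, bands))

    # per-pixel non-negative least squares gives fractional abundance maps
    abund = np.zeros((10, 10, 2))
    for i in range(10):
        for j in range(10):
            abund[i, j], _ = nnls(E, cube[i, j])

    print("mean abundance of pigment 1:", abund[..., 0].mean())
    ```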

  4. Optimization of T2-weighted imaging for shoulder magnetic resonance arthrography by synthetic magnetic resonance imaging.

    PubMed

    Lee, Seung Hyun; Lee, Young Han; Hahn, Seok; Yang, Jaemoon; Song, Ho-Taek; Suh, Jin-Suck

    2017-01-01

    Background: Synthetic magnetic resonance imaging (MRI) allows reformatting of various synthetic images by adjustment of scanning parameters such as repetition time (TR) and echo time (TE). Optimized MR images can be reformatted from T1, T2, and proton density (PD) values to achieve maximum tissue contrast between joint fluid and adjacent soft tissue. Purpose: To demonstrate the method for optimization of TR and TE by synthetic MRI and to validate the optimized images by comparison with conventional shoulder MR arthrography (MRA) images. Material and Methods: Thirty-seven shoulder MRA images acquired by synthetic MRI were retrospectively evaluated for PD, T1, and T2 values at the joint fluid and glenoid labrum. Differences in signal intensity between the fluid and labrum were observed between TR of 500-6000 ms and TE of 80-300 ms in T2-weighted (T2W) images. Conventional T2W and synthetic images were analyzed for diagnostic agreement of supraspinatus tendon abnormalities (kappa statistics) and image quality scores (one-way analysis of variance with post-hoc analysis). Results: Optimized mean values of TR and TE were 2724.7 ± 1634.7 and 80.1 ± 0.4, respectively. Diagnostic agreement for supraspinatus tendon abnormalities between conventional and synthetic MR images was excellent (κ = 0.882). The mean image quality score of the joint space in optimized synthetic images was significantly higher compared with those in conventional and synthetic images (2.861 ± 0.351 vs. 2.556 ± 0.607 vs. 2.750 ± 0.439; P < 0.05). Conclusion: Synthetic MRI with optimized TR and TE for shoulder MRA enables optimization of soft-tissue contrast.
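
    The TR/TE optimisation described above can be illustrated with a simplified spin-echo signal model, S = PD*(1 - exp(-TR/T1))*exp(-TE/T2), evaluated on a TR/TE grid to find the maximum fluid-labrum contrast. The tissue PD/T1/T2 values below are invented placeholders, not the values measured in the study, and the signal model ignores sequence details of the vendor's synthetic MRI implementation.

    ```python
    import numpy as np

    def se_signal(pd, t1, t2, tr, te):
        """Simplified spin-echo signal model used to synthesise images from PD/T1/T2 maps."""
        return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

    # illustrative tissue values (ms); not the values measured in the study
    fluid  = dict(pd=1.0, t1=3000.0, t2=800.0)
    labrum = dict(pd=0.7, t1=1000.0, t2=30.0)

    tr = np.arange(500, 6001, 50)[:, None]    # ms, one TR per row
    te = np.arange(80, 301, 5)[None, :]       # ms, one TE per column
    contrast = se_signal(**fluid, tr=tr, te=te) - se_signal(**labrum, tr=tr, te=te)

    i, j = np.unravel_index(np.argmax(contrast), contrast.shape)
    print(f"max fluid-labrum contrast at TR = {tr[i, 0]} ms, TE = {te[0, j]} ms")
    ```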

  5. Fundamental limits of image registration performance: Effects of image noise and resolution in CT-guided interventions.

    PubMed

    Ketcha, M D; de Silva, T; Han, R; Uneri, A; Goerres, J; Jacobson, M; Vogt, S; Kleinszig, G; Siewerdsen, J H

    2017-02-11

    In image-guided procedures, image acquisition is often performed primarily for the task of geometrically registering information from another image dataset, rather than detection/visualization of a particular feature. While the ability to detect a particular feature in an image has been studied extensively with respect to image quality characteristics (noise, resolution) and is an ongoing, active area of research, comparatively little has been accomplished to relate such image quality characteristics to registration performance. To establish such a framework, we derived Cramer-Rao lower bounds (CRLB) for registration accuracy, revealing the underlying dependencies on image variance and gradient strength. The CRLB was analyzed as a function of image quality factors (in particular, dose) for various similarity metrics and compared to registration accuracy using CT images of an anthropomorphic head phantom at various simulated dose levels. Performance was evaluated in terms of root mean square error (RMSE) of the registration parameters. Analysis of the CRLB shows two primary dependencies: 1) noise variance (related to dose); and 2) sum of squared image gradients (related to spatial resolution and image content). Comparison of the measured RMSE to the CRLB showed that, for the best registration method, the RMSE achieved the CRLB to within an efficiency factor of 0.21, and optimal estimators followed the predicted inverse proportionality between registration performance and radiation dose. Analysis of the CRLB for image registration is an important step toward understanding and evaluating an intraoperative imaging system with respect to a registration task. While the CRLB is optimistic in absolute performance, it reveals a basis for relating the performance of registration estimators as a function of noise content and may be used to guide acquisition parameter selection (e.g., dose) for purposes of intraoperative registration.
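
    The two dependencies identified above can be made concrete with a toy calculation: for a 1-D translation estimate under additive Gaussian noise, a standard form of the CRLB is the noise variance divided by the sum of squared image gradients, so the bound tightens with sharper image content and loosens as dose drops (noise variance rises). The sketch below is a minimal illustration under these assumptions; the test image, noise scaling, and dose levels are invented.

    ```python
    import numpy as np

    def crlb_translation(image, sigma):
        """Cramer-Rao lower bound on the variance of a 1-D (x) translation estimate.

        For additive Gaussian noise of variance sigma^2, the bound is
        sigma^2 / (sum of squared image gradients along x).
        """
        gx = np.gradient(image.astype(float), axis=1)
        return sigma**2 / np.sum(gx**2)

    rng = np.random.default_rng(5)
    img = rng.normal(0, 1, (128, 128)).cumsum(axis=1)    # smooth-ish test image
    for dose_scale in (1.0, 0.5, 0.25):                  # lower dose -> higher noise variance
        sigma = 10.0 / np.sqrt(dose_scale)               # noise grows as 1/sqrt(dose)
        bound = np.sqrt(crlb_translation(img, sigma))
        print(f"dose x{dose_scale}: CRLB std = {bound:.4f} px")
    ```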

  6. Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.

    PubMed

    Frieauff, W; Martus, H J; Suter, W; Elhajouji, A

    2013-01-01

    The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and a specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure as well as flow cytometric approaches have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as cytotoxicity parameter, as well as for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The possibility to analyse 24 slides within 65h by automatic analysis over the weekend and the high reproducibility of the results make automatic image processing a powerful tool for the micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS which is supporting various assays at Novartis.

  7. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. Local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, Haar wavelet transform was applied on the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed. The local NPS analysis indicated image quality improvement with the r-square values increased from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
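
    The local NPS metric described above (radially averaged NPS fitted to a power law, summarised by the r-square of the fit) can be sketched as follows for a single flat-field ROI. The FFT-based NPS estimate, the white-noise stand-in for a sub-panel ROI, and the log-log linear fit are simplifying assumptions; the detector-specific normalisation and binning used in the study are omitted.

    ```python
    import numpy as np
    from scipy.stats import linregress

    def radial_average(nps2d):
        """Radially average a 2-D noise power spectrum into a 1-D profile."""
        n = nps2d.shape[0]
        y, x = np.indices(nps2d.shape)
        r = np.hypot(x - n // 2, y - n // 2).astype(int)
        profile = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
        return profile[1:n // 2]                 # drop the DC bin, keep up to Nyquist

    def nps_r_square(roi):
        """r-square of a power-law (log-log linear) fit to the radially averaged NPS."""
        roi = roi - roi.mean()
        nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi)))**2 / roi.size
        prof = radial_average(nps2d)
        freq = np.arange(1, len(prof) + 1)
        fit = linregress(np.log(freq), np.log(prof + 1e-12))
        return fit.rvalue**2

    rng = np.random.default_rng(6)
    flat_roi = rng.normal(100, 2, (128, 128))    # stand-in for one EPID sub-panel ROI
    print("r-square of power-law NPS fit:", round(nps_r_square(flat_roi), 3))
    ```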

  8. Cone-Beam Computed Tomography (CBCT) Hepatic Arteriography in Chemoembolization for Hepatocellular Carcinoma: Performance Depicting Tumors and Tumor Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, In Joon; Chung, Jin Wook, E-mail: chungjw@snu.ac.kr; Yin, Yong Hu

    2015-10-15

    Purpose: This study was designed to analyze retrospectively the performance of cone-beam computed tomography (CBCT) hepatic arteriography in depicting tumors and their feeders and to investigate the related determining factors in chemoembolization for hepatocellular carcinoma (HCC). Methods: Eighty-six patients with 142 tumors satisfying the imaging diagnosis criteria of HCC were included in this study. The performance of CBCT hepatic arteriography for chemoembolization per tumor and per patient was evaluated using maximum intensity projection images alone (MIP analysis) or MIP combined with multiplanar reformation images (MIP + MPR analysis) regarding the following three aspects: tumor depiction, confidence of tumor feeder detection, and trackability of tumor feeders. Tumor size, tumor enhancement, tumor location, number of feeders, diaphragmatic motion, portal vein enhancement, and hepatic artery to parenchyma enhancement ratio were regarded as potential determining factors. Results: Tumors were depicted in 125 (88.0 %) and 142 tumors (100 %) on MIP and MIP + MPR analysis, respectively. Imaging performances on MIP and MIP + MPR analysis were good enough to perform subsegmental chemoembolization without additional angiographic investigation in 88 (62.0 %) and 128 tumors (90.1 %) on a per-tumor basis and in 43 (50 %) and 73 (84.9 %) on a per-patient basis, respectively. Significant determining factors for performance in MIP + MPR analysis on a per-tumor basis were tumor size (p = 0.030), tumor enhancement (p = 0.005), tumor location (p = 0.001), and diaphragmatic motion (p < 0.001). Conclusions: CBCT hepatic arteriography provided sufficient information for subsegmental chemoembolization by depicting tumors and their feeders in the vast majority of patients. Combined analysis of MIP and MPR images was essential to enhance the performance of CBCT hepatic arteriography.

  9. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017

  10. HYPOTrace: image analysis software for measuring hypocotyl growth and shape demonstrated on Arabidopsis seedlings undergoing photomorphogenesis.

    PubMed

    Wang, Liya; Uilecan, Ioan Vlad; Assadi, Amir H; Kozmik, Christine A; Spalding, Edgar P

    2009-04-01

    Analysis of time series of images can quantify plant growth and development, including the effects of genetic mutations (phenotypes) that give information about gene function. Here is demonstrated a software application named HYPOTrace that automatically extracts growth and shape information from electronic gray-scale images of Arabidopsis (Arabidopsis thaliana) seedlings. Key to the method is the iterative application of adaptive local principal components analysis to extract a set of ordered midline points (medial axis) from images of the seedling hypocotyl. Pixel intensity is weighted to avoid the medial axis being diverted by the cotyledons in areas where the two come in contact. An intensity feature useful for terminating the midline at the hypocotyl apex was isolated in each image by subtracting the baseline with a robust local regression algorithm. Applying the algorithm to time series of images of Arabidopsis seedlings responding to light resulted in automatic quantification of hypocotyl growth rate, apical hook opening, and phototropic bending with high spatiotemporal resolution. These functions are demonstrated here on wild-type, cryptochrome1, and phototropin1 seedlings for the purpose of showing that HYPOTrace generated expected results and to show how much richer the machine-vision description is compared to methods more typical in plant biology. HYPOTrace is expected to benefit seedling development research, particularly in the photomorphogenesis field, by replacing many tedious, error-prone manual measurements with a precise, largely automated computational tool.

  11. Segmentation of White Blood Cells From Microscopic Images Using a Novel Combination of K-Means Clustering and Modified Watershed Algorithm.

    PubMed

    Ghane, Narjes; Vard, Alireza; Talebi, Ardeshir; Nematollahy, Pardis

    2017-01-01

    Recognition of white blood cells (WBCs) is the first step in diagnosing some particular diseases such as acquired immune deficiency syndrome, leukemia, and other blood-related diseases; this recognition is usually done by pathologists using an optical microscope. The process is time-consuming, extremely tedious, and expensive and needs experienced experts in this field. Thus, a computer-aided diagnosis system that assists pathologists in the diagnostic process can be very effective. Segmentation of WBCs is usually a first step in developing a computer-aided diagnosis system. The main purpose of this paper is to segment WBCs from microscopic images. For this purpose, we present a novel combination of thresholding, k-means clustering, and modified watershed algorithms in three stages including (1) segmentation of WBCs from a microscopic image, (2) extraction of nuclei from the cell image, and (3) separation of overlapping cells and nuclei. The evaluation results of the proposed method show that the similarity measure, precision, and sensitivity were, respectively, 92.07, 96.07, and 94.30% for nucleus segmentation and 92.93, 97.41, and 93.78% for cell segmentation. In addition, statistical analysis presents high similarity between manual segmentation and the results obtained by the proposed method.
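
    A bare-bones version of the k-means plus marker-controlled watershed idea can be put together with scikit-learn and scikit-image as below: intensity-based k-means yields a nucleus mask, and a distance-transform watershed splits the overlapping objects. The synthetic two-blob image, the intensity-only clustering, and the peak-distance parameter are assumptions; the paper's three-stage method with thresholding and colour features is not reproduced.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.cluster import KMeans
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    # synthetic gray-scale "smear": two overlapping dark blobs standing in for WBC nuclei
    yy, xx = np.mgrid[0:100, 0:100]
    img = np.full((100, 100), 200.0)
    img -= 150 * np.exp(-((yy - 45)**2 + (xx - 40)**2) / 120.0)
    img -= 150 * np.exp(-((yy - 55)**2 + (xx - 60)**2) / 120.0)

    # 1) k-means on intensity separates "nucleus" pixels from background
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
    dark_cluster = np.argmin([img.reshape(-1)[labels == k].mean() for k in (0, 1)])
    mask = (labels == dark_cluster).reshape(img.shape)

    # 2) marker-controlled (modified) watershed separates the overlapping nuclei
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=10, labels=mask.astype(int))
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    segmented = watershed(-distance, markers, mask=mask)
    print("objects found:", segmented.max())
    ```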

  12. Computational intelligence for target assessment in Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Micheli-Tzanakou, Evangelia; Hamilton, J. L.; Zheng, J.; Lehman, Richard M.

    2001-11-01

    Recent advances in image and signal processing have created a new challenging environment for biomedical engineers. Methods that were developed for different fields are now finding fertile ground in biomedicine, especially in the analysis of bio-signals and in the understanding of images. More and more, these methods are used in the operating room, helping surgeons, and in the physician's office as aids for diagnostic purposes. Neural network (NN) research, on the other hand, has come a long way in the past decade. NNs now consist of many thousands of highly interconnected processing elements that can encode, store and recall relationships between different patterns by altering the weighting coefficients of inputs in a systematic way. Although they can generate reasonable outputs from unknown input patterns, and can tolerate a great deal of noise, they are very slow when run on a serial machine. We have used advanced signal processing and innovative image processing methods, along with computational intelligence, for diagnostic purposes and as visualization aids inside and outside the operating room. Applications discussed include EEGs and field potentials in Parkinson's disease, along with 3D reconstruction of MR or fMR brain images of Parkinson's patients, which are currently used in the operating room for pallidotomies and deep brain stimulation (DBS).

  13. Good reasons to implement quality assurance in nationwide breast cancer screening programs in Croatia and Serbia: results from a pilot study.

    PubMed

    Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje

    2011-04-01

    The purpose of this study is to investigate the need for and the possible achievements of a comprehensive QA programme and to look at the effects of simple corrective actions on image quality in Croatia and in Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented. Then the same parameters were re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as these were related to working habits in mammography units, such as film processing and darkroom conditions. It has been demonstrated how simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as OD, gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  14. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.

  15. Special purpose computer system with highly parallel pipelines for flow visualization using holography technology

    NASA Astrophysics Data System (ADS)

    Masuda, Nobuyuki; Sugie, Takashige; Ito, Tomoyoshi; Tanaka, Shinjiro; Hamada, Yu; Satake, Shin-ichi; Kunugi, Tomoaki; Sato, Kazuho

    2010-12-01

    We have designed a PC cluster system with special-purpose computer boards for visualization of fluid flow using digital holographic particle tracking velocimetry (DHPTV). Each board carries a Field Programmable Gate Array (FPGA) chip implementing a pipeline that calculates the intensity of an object from a hologram by fast Fourier transform (FFT). This cluster system can create 1024 reconstructed images from a 1024×1024-grid hologram in 0.77 s. It is expected that this system will contribute to the analysis of fluid flow using DHPTV.
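
    The FFT-based reconstruction that the FPGA pipeline performs can be sketched in software with the angular-spectrum method: one forward FFT, multiplication by a propagation kernel, and one inverse FFT per reconstruction depth. The sketch below is a simplified software stand-in under assumed optical parameters (pixel pitch, wavelength, propagation distance); it is not the hardware pipeline itself.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex field by distance z with the angular-spectrum method
        (forward FFT, multiply by the propagation kernel, inverse FFT)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)      # evanescent components suppressed
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # toy in-line hologram: a plane wave with a small occluding "particle"
    n, dx, wl = 256, 10e-6, 532e-9                # pixels, pixel pitch (m), wavelength (m)
    field = np.ones((n, n), dtype=complex)
    field[120:136, 120:136] = 0.0                 # particle at the object plane
    holo = np.abs(angular_spectrum(field, wl, dx, z=0.02))**2   # intensity 2 cm downstream

    # back-propagate the recorded hologram to obtain a reconstructed object intensity
    recon = np.abs(angular_spectrum(np.sqrt(holo), wl, dx, z=-0.02))**2
    print("min reconstructed intensity near particle:",
          round(float(recon[120:136, 120:136].min()), 3))
    ```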

  16. Research of processes of reception and analysis of dynamic digital medical images in hardware/software complexes used for diagnostics and treatment of cardiovascular diseases

    NASA Astrophysics Data System (ADS)

    Karmazikov, Y. V.; Fainberg, E. M.

    2005-06-01

    Work with DICOM-compatible equipment integrated into hardware and software systems for medical purposes is considered. The structure of the data reception and transformation process is presented using the example of the digital roentgenography and angiography systems included in the DIMOL-IK hardware-software complex. Algorithms for data reception and analysis are proposed, and questions of further processing and storage of the received data are discussed.

  17. Development and Testing of a 212Pb/212Bi Peptide for Targeting Metastatic Melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisher, Darrell R.

    2012-10-25

    The purpose of this project is to develop a new radiolabeled peptide for imaging and treating metastatic melanoma. The immunoconjugate consists of a receptor-specific peptide that targets melanoma cells. The beta-emitter lead-212 (half-life = 10.4 hours) is linked by coordination chemistry to the peptide. After injection, the peptide targets melanoma receptors on the surfaces of melanoma cells. Lead-212 decays to the alpha-emitter bismuth-212 (half-life = 60 minutes). Alpha-particles that hit melanoma cell nuclei are likely to kill the melanoma cell. For cancer cell imaging, the lead-212 is replaced by lead-203 (half-life = 52 hours). Lead-203 emits 279 keV photons (80.1% abundance) that can be imaged and measured for biodistribution analysis, cancer imaging, and quantitative dosimetry.

  18. Objective assessment in digital images of skin erythema caused by radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsubara, H., E-mail: matubara@nirs.go.jp; Matsufuji, N.; Tsuji, H.

    Purpose: Skin toxicity caused by radiotherapy has been visually classified into discrete grades. The present study proposes an objective and continuous assessment method of skin erythema in digital images taken under arbitrary lighting conditions, which is the case for most clinical environments. The purpose of this paper is to show the feasibility of the proposed method. Methods: Clinical data were gathered from six patients who received carbon beam therapy for lung cancer. Skin condition was recorded using an ordinary compact digital camera under unfixed lighting conditions; a laser Doppler flowmeter was used to measure blood flow in the skin. The photos and measurements were taken at 3 h, 30 days, and 90 days after irradiation. Images were decomposed into hemoglobin and melanin colors using independent component analysis. Pixel values in hemoglobin color images were compared with skin dose and skin blood flow. The uncertainty of the practical photographic method was also studied in nonclinical experiments. Results: The clinical data showed good linearity between skin dose, skin blood flow, and pixel value in the hemoglobin color images; their correlation coefficients were larger than 0.7. It was deduced from the nonclinical experiments that the uncertainty of the proposed photographic method was 15%; such an uncertainty was not critical for assessment of skin erythema in practical use. Conclusions: Feasibility of the proposed method for assessment of skin erythema using digital images was demonstrated. The numerical relationship obtained helped to predict skin erythema by artificial processing of skin images. Although the proposed method using photographs taken under unfixed lighting conditions increased the uncertainty of skin information in the images, it was shown to be powerful for the assessment of skin conditions because of its flexibility and adaptability.
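
    The decomposition step can be imitated with FastICA on a two-channel linear mixing model, as in the sketch below; the synthetic hemoglobin and melanin maps and the mixing matrix are invented stand-ins (real skin-colour ICA works on log-RGB densities and needs a sign/scale convention to label the recovered components), so this only illustrates the mechanics of recovering two independent chromophore-like maps.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(7)
    h, w = 64, 64

    # synthetic skin patch: hemoglobin and melanin "density" source maps
    hemoglobin = np.zeros((h, w)); hemoglobin[20:40, 20:40] = 1.0   # erythema patch
    melanin = np.tile(np.linspace(0.2, 0.8, w), (h, 1))             # smooth pigmentation
    S = np.stack([hemoglobin.ravel(), melanin.ravel()], axis=1)
    S += rng.normal(0, 0.02, S.shape)

    # two observed colour channels as linear mixtures of the two chromophores
    A = np.array([[0.8, 0.3],    # hypothetical colour vectors, not measured ones
                  [0.4, 0.6]])
    X = S @ A.T

    # independent component analysis recovers hemoglobin- and melanin-like maps
    ica = FastICA(n_components=2, random_state=0)
    maps = ica.fit_transform(X).reshape(h, w, 2)
    print("recovered component map shape:", maps.shape)
    ```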

  19. Using Purpose-Built Functions and Block Hashes to Enable Small Block and Sub-file Forensics

    DTIC Science & Technology

    2010-01-01

    We tested precarve using the nps-2009-canon2-gen6 (Garfinkel et al., 2009) disk image, which was created with a 32 MB SD card. [Fragmentary text and figure captions: Fig. 1 shows usage of a 160 GB iPod as reported by iTunes 8.2.1 (top), as reported by the file system (bottom center), and as computed with random sampling (bottom right); note that iTunes reports usage in GiB even though the program displays the “GB” label.]

  20. [Use of blue and green systems of image visualization in roentgenology].

    PubMed

    Riuduger, Iu G

    2004-01-01

    The main features of the two image visualization systems, related to the specific properties of the intensifying screens and radiographic films used in each of them, are discussed. The development kinetics of modern orthochromatic general-purpose radiographic films were studied in comparison with those of traditional films, and differences related to the radiation hardness of some intensifying screens manufactured in Russia were investigated. On the basis of the analysis of the "green" system's characteristics, practical advice is given for reorienting X-ray examination rooms in Russia toward gadolinium screens and modern radiographic films.

  1. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will help to make atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data, aims to be completely open source, and is licenced under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the QT framework.
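
    Since the abstract names the libraries (h5py, numpy, PIL), a minimal reading-and-quicklook sketch along those lines is shown below. The file name and the HDF5 dataset path are hypothetical placeholders (actual EUMETSAT product layouts differ by format and channel), so they must be replaced with a real Meteosat file before this runs.

    ```python
    import h5py
    import numpy as np
    from PIL import Image

    # hypothetical file name and dataset path; real Meteosat HDF5 layouts differ by product
    H5_PATH = "meteosat_segment.h5"
    DATASET = "U-MARF/MSG/Level1.5/DATA/Channel 09/IMAGE_DATA"

    with h5py.File(H5_PATH, "r") as f:
        f.visit(print)                           # list the groups/datasets in the file
        counts = np.array(f[DATASET], dtype=float)

    # simple quick-look: stretch to 8 bit and save a PNG for the observatory dashboard
    stretched = 255 * (counts - counts.min()) / max(np.ptp(counts), 1)
    Image.fromarray(stretched.astype(np.uint8)).save("ir108_quicklook.png")
    ```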

  2. VizieR Online Data Catalog: Stellar and planet properties for K2 candidates (Montet+, 2015)

    NASA Astrophysics Data System (ADS)

    Montet, B. T.; Morton, T. D.; Foreman-Mackey, D.; Johnson, J. A.; Hogg, D. W.; Bowler, B. P.; Latham, D. W.; Bieryla, A.; Mann, A. W.

    2017-09-01

    In this paper, we present stellar and planetary parameters for each system. We also analyze the false positive probability (FPP) of each system using vespa, a new publicly available, general-purpose implementation of the Morton (2012ApJ...761....6M) procedure to calculate FPPs for transiting planets. Through this analysis, as well as archival imaging, ground-based seeing-limited survey data, and adaptive optics imaging, we are able to confirm 21 of these systems as transiting planets at the 99% confidence level. Additionally, we identify six systems as false positives. (5 data files).

  3. TH-CD-202-02: A Preliminary Study Evaluating Beam-Hardening Artifact Reduction On CT Direct Electron-Density Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, H; Dolly, S; Zhao, T

    Purpose: A prototype reconstruction algorithm that can provide direct electron density (ED) images from single energy CT scans is currently being developed by Siemens Healthcare GmbH. This feature can eliminate the need for a kV-specific calibration curve for radiation treatment planning. An added benefit is that beam-hardening artifacts are also reduced on direct-ED images due to the underlying material decomposition. This study is to quantitatively analyze the reduction of beam-hardening artifacts on direct-ED images and suggest additional clinical usages. Methods: HU and direct-ED images were reconstructed on a head phantom scanned on a Siemens Definition AS CT scanner at five tube potentials of 70 kV, 80 kV, 100 kV, 120 kV and 140 kV, respectively. From these images, mean, standard deviation (SD), and local NPS were calculated for regions of interest (ROIs) at the same locations and of the same sizes. A complete analysis of beam-hardening artifact reduction and image quality improvement was conducted. Results: Along with the increase of tube potentials, ROI means and SDs decrease on both HU and direct-ED images. The mean value differences between HU and direct-ED images are up to 8% with an absolute value of 2.9. Compared to those on HU images, the SDs are lower on direct-ED images, and the differences are up to 26%. Interestingly, the local NPS calculated from direct-ED images shows consistent values in the low spatial frequency domain for images acquired from all tube potential settings, while they varied dramatically on HU images. This also confirms the beam-hardening artifact reduction on ED images. Conclusions: The low SDs on direct-ED images and relatively consistent NPS values in the low spatial frequency domain indicate a reduction of beam-hardening artifacts. The direct-ED image has the potential to assist in more accurate organ contouring, and is a better fit for the desired purpose of CT simulations for radiotherapy.

  4. Temporal Lobe Epilepsy: Quantitative MR Volumetry in Detection of Hippocampal Atrophy

    PubMed Central

    Farid, Nikdokht; Girard, Holly M.; Kemmotsu, Nobuko; Smith, Michael E.; Magda, Sebastian W.; Lim, Wei Y.; Lee, Roland R.

    2012-01-01

    Purpose: To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). Materials and Methods: This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration–cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Results: Quantitative MR imaging–derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%–89.5%) and specificity (92.2%–94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Conclusion: Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112638/-/DC1 PMID:22723496
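
    The hippocampal asymmetry measure at the heart of the classification can be illustrated with a simple index, |L − R| normalised by the mean volume, evaluated with an ROC analysis on simulated volumes. The definition of the index, the volume distributions, and the group sizes below are assumptions for illustration; they are not the quantitative MR imaging outputs analysed in the study.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def asymmetry_index(left_vol, right_vol):
        """One common hippocampal asymmetry index: |L - R| normalised by the mean volume."""
        return np.abs(left_vol - right_vol) / ((left_vol + right_vol) / 2.0)

    # simulated volumes (mm^3): controls roughly symmetric, TLE patients atrophic on one side
    rng = np.random.default_rng(8)
    ctrl_L, ctrl_R = rng.normal(4200, 300, 116), rng.normal(4150, 300, 116)
    tle_L,  tle_R  = rng.normal(3300, 400, 34),  rng.normal(4100, 300, 34)

    ai = np.concatenate([asymmetry_index(ctrl_L, ctrl_R), asymmetry_index(tle_L, tle_R)])
    labels = np.concatenate([np.zeros(116), np.ones(34)])
    print("ROC AUC of asymmetry index for TLE vs control:",
          round(roc_auc_score(labels, ai), 3))
    ```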

  5. Mariner 6 and 7 picture analysis

    NASA Technical Reports Server (NTRS)

    Leighton, R. B.

    1975-01-01

    Analysis of Mariner 6 and 7 far-encounter (FE) pictures is discussed. The purpose of the studies was to devise ways to combine digital data from the full set of FE pictures so as to improve surface resolution, distinguish clouds and haze patches from permanent surface topographic markings, deduce improved values for radius, oblateness, and spin-axis orientation, and produce a composite photographic map of Mars. Attempts to measure and correct camera distortions, locate each image in the frame, and convert image coordinates to martian surface coordinates were highly successful; residual uncertainties in location were considerably less than one pixel. However, analysis of the data to improve the radius, figure, and axial tilt and to produce a composite map was curtailed because of the superior data provided by Mariner 9. The data, programs, and intermediate results are still available (1976), and the project could be resumed with little difficulty.

  6. LV dyssynchrony as assessed by phase analysis of gated SPECT myocardial perfusion imaging in patients with Wolff-Parkinson-White syndrome

    PubMed Central

    Chen, Chun; Miao, Changqing; Feng, Jianlin; Zhou, Yanli; Cao, Kejiang; Lloyd, Michael S.; Chen, Ji

    2013-01-01

    Purpose The purpose of this study was to evaluate left ventricular (LV) mechanical dyssynchrony in patients with Wolff-Parkinson-White (WPW) syndrome pre- and post-radiofrequency catheter ablation (RFA) using phase analysis of gated single photon emission computed tomography (SPECT) myocardial perfusion imaging (MPI). Methods Forty-five WPW patients were enrolled and had gated SPECT MPI pre- and 2–3 days post-RFA. Electrophysiological study (EPS) was used to locate accessory pathways (APs) and categorize the patients according to the AP locations (septal, left and right free wall). Electrocardiography (ECG) was performed pre- and post-RFA to confirm successful elimination of the APs. Phase analysis of gated SPECT MPI was used to assess LV dyssynchrony pre- and post-RFA. Results Among the 45 patients, 3 had gating errors, and thus 42 had SPECT phase analysis. Twenty-two patients (52.4 %) had baseline LV dyssynchrony. Baseline LV dyssynchrony was more prominent in the patients with septal APs than in the patients with left or right APs (p<0.05). RFA improved LV synchrony in the entire cohort and in the patients with septal APs (p<0.01). Conclusion Phase analysis of gated SPECT MPI demonstrated that LV mechanical dyssynchrony can be present in patients with WPW syndrome. Septal APs result in the greatest degree of LV mechanical dyssynchrony and afford the most benefit after RFA. This study supports further investigation in the relationship between electrical and mechanical activation using EPS and phase analysis of gated SPECT MPI. PMID:22532253

  7. The Diagnostic Efficacy of Cone-beam Computed Tomography in Endodontics: A Systematic Review and Analysis by a Hierarchical Model of Efficacy.

    PubMed

    Rosen, Eyal; Taschieri, Silvio; Del Fabbro, Massimo; Beitlitum, Ilan; Tsesis, Igor

    2015-07-01

    The aim of this study was to evaluate the diagnostic efficacy of cone-beam computed tomographic (CBCT) imaging in endodontics based on a systematic search and analysis of the literature using an efficacy model. A systematic search of the literature was performed to identify studies evaluating the use of CBCT imaging in endodontics. The identified studies were subjected to strict inclusion criteria followed by an analysis using a hierarchical model of efficacy (model) designed for appraisal of the literature on the levels of efficacy of a diagnostic imaging modality. Initially, 485 possible relevant articles were identified. After title and abstract screening and a full-text evaluation, 58 articles (12%) that met the inclusion criteria were analyzed and allocated to levels of efficacy. Most eligible articles (n = 52, 90%) evaluated technical characteristics or the accuracy of CBCT imaging, which was defined in this model as low levels of efficacy. Only 6 articles (10%) proclaimed to evaluate the efficacy of CBCT imaging to support the practitioner's decision making; treatment planning; and, ultimately, the treatment outcome, which was defined as higher levels of efficacy. The expected ultimate benefit of CBCT imaging to the endodontic patient as evaluated by its level of diagnostic efficacy is unclear and is mainly limited to its technical and diagnostic accuracy efficacies. Even for these low levels of efficacy, current knowledge is limited. Therefore, a cautious and rational approach is advised when considering CBCT imaging for endodontic purposes. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  8. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum †

    PubMed Central

    Kresnaraman, Brahmastro; Deguchi, Daisuke; Takahashi, Tomokazu; Mekada, Yoshito; Ide, Ichiro; Murase, Hiroshi

    2016-01-01

    During the night or in poorly lit areas, thermal cameras are a better choice instead of normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from only thermal information is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations. PMID:27110781
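
    A minimal sketch of a CCA-based cross-spectrum mapping in the spirit of the first (whole-image) step, assuming scikit-learn's CCA on flattened, paired training images; the patch-wise second step and any preprocessing are omitted, and all names are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_global_cca(thermal_train, visible_train, n_components=32):
    """thermal_train / visible_train: (n_pairs, n_pixels) flattened image pairs
    of the same faces in the thermal and visible spectra.
    n_components must not exceed the number of training pairs."""
    cca = CCA(n_components=n_components, max_iter=1000)
    cca.fit(thermal_train, visible_train)
    return cca

def reconstruct_visible(cca, thermal_images):
    """Map new flattened thermal images toward the visible spectrum
    through the learned canonical correlations."""
    return cca.predict(np.atleast_2d(thermal_images))
```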

  9. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.

    PubMed

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality by filtering without blurring vascular structures is an important step in automatic image analysis. The application of interest here is to automatically extract the structural components of the microvascular system with accuracy from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.

  10. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts temperature differences into differences in electrical signal level, so it can be applied in medical settings, for example to estimate blood flow speed and vessel location [1] or to assess pain [2]. As un-cooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an un-cooled thermal imager, for example rapid warning of fever such as in SARS screening. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si micro-bolometer UFPA is widely applied at present for its stable performance and sensitive response. In this paper, the NETD of a UFPA and the relation between NETD and temperature are studied; several key parameters that affect NETD are listed and a general formula is presented. Finally, images from this kind of thermal imager are analyzed with the aim of detecting persons with fever, and an applied thermal image intensification method is introduced.

  11. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

    CVIPtools is a software package for the exploration of computer vision and image processing developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - a) CVIPtools Graphical User Interface, b) CVIPtools C library and c) CVIPtools MATLAB toolbox - which makes it accessible to a variety of different users. It offers students, faculty, researchers and any other user a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication that makes it suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain practical experience to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, the algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  12. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a compression and encryption method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the spectra of the different target images in the spectral domain. Here, to assess the capacity of the MIOCE method, we evaluate the influence of the number of target images. This analysis allows us to determine the performance limits of the method. To achieve this goal, we use a criterion based on the root-mean-square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral plane area. The different spectral areas are then merged into a single spectral plane. By choosing specific areas, we can compress together 38 images instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated by making use of the mean-square-error (MSE) criterion.

  13. Feasibility of imaging superficial palmar arch using micro-ultrasound, 7T and 3T magnetic resonance imaging.

    PubMed

    Pruzan, Alison N; Kaufman, Audrey E; Calcagno, Claudia; Zhou, Yu; Fayad, Zahi A; Mani, Venkatesh

    2017-02-28

    To demonstrate the feasibility of vessel wall imaging of the superficial palmar arch using high-frequency micro-ultrasound, 7T and 3T magnetic resonance imaging (MRI). Four subjects (ages 22-50 years) were scanned on a micro-ultrasound system with a 45-MHz transducer (Vevo 2100, VisualSonics). Subjects' hands were then imaged on a 3T clinical MR scanner (Siemens Biograph MMR) using an 8-channel special purpose phased array carotid coil. Lastly, subjects' hands were imaged on a 7T clinical MR scanner (Siemens Magnetom 7T Whole Body Scanner) using a custom-built 8-channel transmit-receive carotid coil. All three imaging modalities were subjectively analyzed for image quality and visualization of the vessel wall. The results of this very preliminary study indicated that vessel wall imaging of the superficial palmar arch was feasible with whole-body 7T and 3T MRI in comparison with micro-ultrasound. Subjective analysis of image quality (1-5 scale, 1: poorest, 5: best) from B-mode ultrasound, 3T SPACE MRI, and 7T SPACE MRI indicated that the image quality obtained at 7T was superior to both 3T MRI and micro-ultrasound. The 3D SPACE sequence at both 7T and 3T MRI, with isotropic voxels, allowed multi-planar reformatting of images and gave less operator-dependent results than high-frequency micro-ultrasound imaging. Although quantitative analysis revealed no significant difference between the three methods, 7T trended toward better visibility of the vessel and its wall. Imaging of smaller arteries at 7T is feasible for evaluating atherosclerosis burden and may be of clinical relevance in multiple diseases.

  14. Automated classification and quantitative analysis of arterial and venous vessels in fundus images

    NASA Astrophysics Data System (ADS)

    Alam, Minhaj; Son, Taeyoon; Toslak, Devrim; Lim, Jennifer I.; Yao, Xincheng

    2018-02-01

    It is known that retinopathies may affect arteries and veins differently. Therefore, reliable differentiation of arteries and veins is essential for computer-aided analysis of fundus images. The purpose of this study is to validate one automated method for robust classification of arteries and veins (A-V) in digital fundus images. We combine optical density ratio (ODR) analysis and a blood vessel tracking algorithm to classify arteries and veins. A matched filtering method is used to enhance retinal blood vessels. Bottom-hat filtering and global thresholding are used to segment the vessels and skeletonize individual blood vessels. The vessel tracking algorithm is used to locate the optic disk and to identify source nodes of blood vessels in the optic disk area. Each node can be identified as vein or artery using ODR information. Using the source nodes as starting points, the whole vessel trace is then tracked and classified as vein or artery using vessel curvature and angle information. Fifty color fundus images from diabetic retinopathy patients were used to test the algorithm. Sensitivity, specificity, and accuracy metrics were measured to assess the validity of the proposed classification method compared to ground truths created by two independent observers. The algorithm demonstrated 97.52% accuracy in identifying blood vessels as vein or artery. A quantitative analysis based on the A-V classification showed that the average A-V width ratio for NPDR subjects with hypertension decreased significantly (by 43.13%).
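
    A rough sketch of the vessel enhancement/segmentation and ODR ideas described above, assuming SciPy and scikit-image are available. The filter size, threshold rule, and the ODR convention shown (log ratio of background to vessel intensity per channel) are common choices and assumptions, not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.ndimage import black_tophat
from skimage.morphology import skeletonize

def segment_and_skeletonize(green_channel, tophat_size=15, k_sigma=2.0):
    """Enhance the (dark) retinal vessels with bottom-hat filtering, segment
    them with a global threshold, and skeletonize for the tracking stage."""
    enhanced = black_tophat(green_channel.astype(float), size=tophat_size)
    vessels = enhanced > enhanced.mean() + k_sigma * enhanced.std()
    return vessels, skeletonize(vessels)

def optical_density_ratio(vessel_red, bg_red, vessel_green, bg_green):
    """One common ODR convention: per-channel optical density
    OD = log(background / vessel), ODR = OD_red / OD_green. Arteries tend to
    show a lower red optical density (brighter in red) than veins."""
    return np.log(bg_red / vessel_red) / np.log(bg_green / vessel_green)
```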

  15. How does C-VIEW image quality compare with conventional 2D FFDM?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Jeffrey S., E-mail: nelson.jeffrey@duke.edu; Wells, Jered R.; Baker, Jay A.

    Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D C-VIEW and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than C-VIEW according to both the average observer and automated scores. In addition, between 50% and 70% of C-VIEW images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that C-VIEW provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the C-VIEW image (11 lp/mm FFDM, 5 lp/mm C-VIEW) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with C-VIEW. Whereas the FFDM image contained an approximately white noise texture, the C-VIEW image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: This analysis demonstrates many instances where the C-VIEW image quality differs from FFDM. Compared to FFDM, C-VIEW offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-VIEW images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + C-VIEW performs relative to DBT + FFDM or FFDM alone.

  16. TU-CD-BRB-08: Radiomic Analysis of FDG-PET Identifies Novel Prognostic Imaging Biomarkers in Locally Advanced Pancreatic Cancer Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Y; Shirato, H; Song, J

    2015-06-15

    Purpose: This study aims to identify novel prognostic imaging biomarkers in locally advanced pancreatic cancer (LAPC) using quantitative, high-throughput image analysis. Methods: 86 patients with LAPC receiving chemotherapy followed by SBRT were retrospectively studied. All patients had a baseline FDG-PET scan prior to SBRT. For each patient, we extracted 435 PET imaging features of five types: statistical, morphological, textural, histogram, and wavelet. These features went through redundancy checks, robustness analysis, as well as a prescreening process based on their concordance indices with respect to the relevant outcomes. We then performed principal component analysis on the remaining features (number ranged from 10 to 16), and fitted a Cox proportional hazard regression model using the first 3 principal components. Kaplan-Meier analysis was used to assess the ability to distinguish high- versus low-risk patients separated by median predicted survival. To avoid overfitting, all evaluations were based on leave-one-out cross validation (LOOCV), in which each holdout patient was assigned to a risk group according to the model obtained from a separate training set. Results: For predicting overall survival (OS), the most dominant imaging features were wavelet coefficients. There was a statistically significant difference in OS between patients with predicted high and low risk based on LOOCV (hazard ratio: 2.26, p<0.001). Similar imaging features were also strongly associated with local progression-free survival (LPFS) (hazard ratio: 1.53, p=0.026) on LOOCV. In comparison, neither SUVmax nor TLG was associated with LPFS (p=0.103, p=0.433) (Table 1). Results for progression-free survival and distant progression-free survival showed similar trends. Conclusion: Radiomic analysis identified novel imaging features that showed improved prognostic value over conventional methods. These features characterize the degree of intra-tumor heterogeneity reflected on FDG-PET images, and their biological underpinnings warrant further investigation. If validated in large, prospective cohorts, this method could be used to stratify patients based on individualized risk.
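
    A compact sketch of the leave-one-out PCA + Cox risk-grouping workflow described above, assuming pandas and the lifelines package for the Cox model; feature extraction, robustness checks, and prescreening are omitted, and all names are illustrative:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def loocv_risk_groups(features, time, event, n_pc=3):
    """For each held-out patient: standardize and fit PCA + Cox on the other
    patients, then score the held-out case; split by the median risk score."""
    n = features.shape[0]
    risk = np.empty(n)
    cols = [f"pc{k}" for k in range(n_pc)]
    for i in range(n):
        tr = np.arange(n) != i
        mu, sd = features[tr].mean(0), features[tr].std(0) + 1e-12
        z_tr = (features[tr] - mu) / sd
        _, _, vt = np.linalg.svd(z_tr, full_matrices=False)  # PCA on training fold
        df_tr = pd.DataFrame(z_tr @ vt[:n_pc].T, columns=cols)
        df_tr["T"], df_tr["E"] = time[tr], event[tr]
        cph = CoxPHFitter().fit(df_tr, duration_col="T", event_col="E")
        df_te = pd.DataFrame([((features[i] - mu) / sd) @ vt[:n_pc].T], columns=cols)
        risk[i] = float(np.ravel(cph.predict_partial_hazard(df_te))[0])
    return risk >= np.median(risk)  # True = predicted high-risk group
```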

  17. The most-cited articles in pediatric imaging: a bibliometric analysis.

    PubMed

    Hong, Su J; Lim, Kyoung J; Yoon, Dae Y; Choi, Chul S; Yun, Eun J; Seo, Young L; Cho, Young K; Yoon, Soo J; Moon, Ji Y; Baek, Sora; Lim, Yun-Jung; Lee, Kwanseop

    2017-07-27

    The number of citations that an article has received reflects its impact on the scientific community. The purpose of our study was to identify and characterize the 51 most-cited articles in pediatric imaging. Based on the database of Journal Citation Reports, we selected 350 journals that were considered potential outlets for pediatric imaging articles. The Web of Science search tools were used to identify the most-cited articles relevant to pediatric imaging within the selected journals. The 51 most-cited articles in pediatric imaging were published between 1952 and 2011, with 1980-1989 and 2000-2009 each producing 15 articles. The number of citations ranged from 576 to 124 and the number of annual citations ranged from 49.05 to 2.56. The majority of articles were published in pediatric and related journals (n=26), originated in the United States (n=23), were original articles (n=45), used MRI as the imaging modality (n=27), and were concerned with the subspecialty of the brain (n=34). University College London School of Medicine (n=6) and the School of Medicine, University of California (n=4) were the leading institutions, and Reynolds EO (n=7) was the most prolific author. Our study presents a detailed list and an analysis of the most-cited articles in the field of pediatric imaging, which provides insight into historical developments and allows for recognition of the important advances in this field.

  18. Cryo-image Analysis of Tumor Cell Migration, Invasion, and Dispersal in a Mouse Xenograft Model of Human Glioblastoma Multiforme

    PubMed Central

    Qutaish, Mohammed Q.; Sullivant, Kristin E.; Burden-Gulley, Susan M.; Lu, Hong; Roy, Debashish; Wang, Jing; Basilion, James P.; Brady-Kalnay, Susann M.; Wilson, David L.

    2012-01-01

    Purpose The goals of this study were to create cryo-imaging methods to quantify characteristics (size, dispersal, and blood vessel density) of mouse orthotopic models of glioblastoma multiforme (GBM) and to enable studies of tumor biology, targeted imaging agents, and theranostic nanoparticles. Procedures Green fluorescent protein-labeled, human glioma LN-229 cells were implanted into mouse brain. At 20–38 days, cryo-imaging gave whole brain, 4-GB, 3D microscopic images of bright field anatomy, including vasculature, and fluorescent tumor. Image analysis/visualization methods were developed. Results Vessel visualization and segmentation methods successfully enabled analyses. The main tumor mass volume, the number of dispersed clusters, the number of cells/cluster, and the percent dispersed volume all increase with age of the tumor. Histograms of dispersal distance give a mean and median of 63 and 56 μm, respectively, averaged over all brains. Dispersal distance tends to increase with age of the tumors. Dispersal tends to occur along blood vessels. Blood vessel density did not appear to increase in and around the tumor with this cell line. Conclusion Cryo-imaging and software allow, for the first time, 3D, whole brain, microscopic characterization of a tumor from a particular cell line. LN-229 exhibits considerable dispersal along blood vessels, a characteristic of human tumors that limits treatment success. PMID:22125093

  19. A Multi-Dimensional Approach to Gradient Change in Phonological Acquisition: A Case Study of Disordered Speech Development

    ERIC Educational Resources Information Center

    Glaspey, Amy M.; MacLeod, Andrea A. N.

    2010-01-01

    The purpose of the current study is to document phonological change from a multidimensional perspective for a 3-year-old boy with phonological disorder by comparing three measures: (1) accuracy of consonant productions, (2) dynamic assessment, and (3) acoustic analysis. The methods included collecting a sample of the targets /s, [image omitted],…

  20. Characteristics of Vocal Fold Vibrations in Vocally Healthy Subjects: Analysis with Multi-Line Kymography

    ERIC Educational Resources Information Center

    Yamauchi, Akihito; Imagawa, Hiroshi; Sakakibara, Ken-Ichi; Yokonishi, Hisayuki; Nito, Takaharu; Yamasoba, Tatsuya; Tayama, Niro

    2014-01-01

    Purpose: In this study, the authors aimed to analyze longitudinal data from high-speed digital images in normative subjects using multi-line kymography. Method: Vocally healthy subjects were divided into young (9 men and 17 women; M[subscript age] = 27 years) and older groups (8 men and 12 women; M[subscript age] = 73 years). From high-speed…

  1. An Analysis of the Effectiveness of Individualized Reading Instruction Upon Self-Concept of Disadvantaged Students with Reading Disabilities.

    ERIC Educational Resources Information Center

    Marble, James Marion

    The major purpose of this study was to investigate the possibility that individualized instruction could improve the self-image of children with reading problems. Subjects for the experimental and control groups were selected from five classes of fifth grade students from a predominantly rural, isolated area in Mississippi. The Sears Self-Concept…

  2. A survey of landmine detection using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Makki, Ihab; Younes, Rafic; Francis, Clovis; Bianchi, Tiziano; Zucchetti, Massimo

    2017-02-01

    Hyperspectral imaging is a trending technique in remote sensing that finds application in many different areas, such as agriculture, mapping, target detection, food quality monitoring, etc. This technique gives the ability to remotely identify the composition of each pixel of the image. Therefore, it is a natural candidate for the purpose of landmine detection, thanks to its inherent safety and fast response time. In this paper, we present the results of several studies that employed hyperspectral imaging for the purpose of landmine detection, discussing the different signal processing techniques used in this framework for hyperspectral image processing and target detection. Our purpose is to highlight the progress attained in the detection of landmines using hyperspectral imaging and to identify possible directions for future work, in order to achieve better detection in real-time operation.

  3. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and the Mexican hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 ms−1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699
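
    An illustrative implementation of the undecimated à trous decomposition with the B3 spline scaling function mentioned above; the subsequent spectral processing and wind-direction retrieval steps are not shown, and parameter names are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve

B3 = np.array([1., 4., 6., 4., 1.]) / 16.0  # B3 spline scaling function

def a_trous(image, n_scales=4):
    """Undecimated (à trous) wavelet decomposition of a 2D image.
    Returns the detail planes for each scale plus the final smooth residual."""
    planes, smooth = [], np.asarray(image, float)
    for j in range(n_scales):
        step = 2 ** j                       # insert 2**j - 1 zeros between taps
        k = np.zeros(4 * step + 1)
        k[::step] = B3
        smoother = convolve(convolve(smooth, k[None, :], mode="mirror"),
                            k[:, None], mode="mirror")
        planes.append(smooth - smoother)    # detail (wavelet) plane at scale j
        smooth = smoother
    planes.append(smooth)                   # coarsest approximation
    return planes
```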

  4. Automated site characterization for robotic sample acquisition systems

    NASA Astrophysics Data System (ADS)

    Scholl, Marija S.; Eberlein, Susan J.

    1993-04-01

    A mobile, semiautonomous vehicle with multiple sensors and on-board intelligence is proposed for performing preliminary scientific investigations on extraterrestrial bodies prior to human exploration. Two technologies, a hybrid optical-digital computer system based on optical correlator technology and an image and instrument data analysis system, provide complementary capabilities that might be part of an instrument package for an intelligent robotic vehicle. The hybrid digital-optical vision system could perform real-time image classification tasks using an optical correlator with programmable matched filters under control of a digital microcomputer. The data analysis system would analyze visible and multiband imagery to extract mineral composition and textural information for geologic characterization. Together these technologies would support the site characterization needs of a robotic vehicle for both navigational and scientific purposes.

  5. Sweet-spot training for early esophageal cancer detection

    NASA Astrophysics Data System (ADS)

    van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.

    2016-03-01

    Over the past decade, the imaging tools available to endoscopists have improved drastically. This has enabled physicians to visually inspect the intestinal tissue for early signs of malignant lesions. Besides this, recent studies show the feasibility of supportive image analysis for endoscopists, but the analysis problem is typically approached as a segmentation task where binary ground truth is employed. In this study, we show that the detection of early cancerous tissue in the gastrointestinal tract cannot be approached as a binary segmentation problem and that it is crucial and clinically relevant to involve multiple experts in annotating early lesions. By employing the so-called sweet spot for training purposes as a metric, a much better detection performance can be achieved. Furthermore, a multi-expert-based ground truth, i.e. a golden standard, enables an improved validation of the resulting delineations. For this purpose, besides the sweet spot we also propose another novel metric, the Jaccard Golden Standard (JIGS), that can handle multiple ground-truth annotations. Our experiments involving these new metrics and based on the golden standard show that the performance of a detection algorithm for early neoplastic lesions in Barrett's esophagus can be increased significantly, demonstrating a 10-percentage-point increase in the resulting F1 detection score.
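
    As a hedged illustration of scoring a detection against a multi-expert ground truth, the sketch below computes a Jaccard index against a majority-vote consensus mask; the paper's sweet-spot and JIGS metrics are defined there, and this stand-in only conveys the general idea:

```python
import numpy as np

def consensus_jaccard(prediction, annotations, vote_threshold=0.5):
    """Jaccard index of a binary detection mask against a consensus mask:
    pixels marked by at least `vote_threshold` of the expert annotations."""
    votes = np.mean(np.stack([a.astype(float) for a in annotations]), axis=0)
    consensus = votes >= vote_threshold
    inter = np.logical_and(prediction, consensus).sum()
    union = np.logical_or(prediction, consensus).sum()
    return float(inter) / float(union) if union else 1.0
```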

  6. Copyright Information

    Atmospheric Science Data Center

    2013-03-25

    ... for educational or informational purposes, including photo collections, textbooks, public exhibits, and Internet web pages.   ... endorsement of commercial goods or services. If a NASA image includes an identifiable person, using the image for commercial purposes ...

  7. Peripheral Quantitative CT (pQCT) Using a Dedicated Extremity Cone-Beam CT Scanner

    PubMed Central

    Muhit, A. A.; Arora, S.; Ogawa, M.; Ding, Y.; Zbijewski, W.; Stayman, J. W.; Thawait, G.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Bingham, C.O.; Means, K.; Carrino, J. A.; Siewerdsen, J. H.

    2014-01-01

    Purpose We describe the initial assessment of the peripheral quantitative CT (pQCT) imaging capabilities of a cone-beam CT (CBCT) scanner dedicated to musculoskeletal extremity imaging. The aim is to accurately measure and quantify bone and joint morphology using information automatically acquired with each CBCT scan, thereby reducing the need for a separate pQCT exam. Methods A prototype CBCT scanner providing isotropic, sub-millimeter spatial resolution and soft-tissue contrast resolution comparable or superior to standard multi-detector CT (MDCT) has been developed for extremity imaging, including the capability for weight-bearing exams and multi-mode (radiography, fluoroscopy, and volumetric) imaging. Assessment of pQCT performance included measurement of bone mineral density (BMD), morphometric parameters of subchondral bone architecture, and joint space analysis. Measurements employed phantoms, cadavers, and patients from an ongoing pilot study imaged with the CBCT prototype (at various acquisition, calibration, and reconstruction techniques) in comparison to MDCT (using pQCT protocols for analysis of BMD) and micro-CT (for analysis of subchondral morphometry). Results The CBCT extremity scanner yielded BMD measurement within ±2–3% error in both phantom studies and cadaver extremity specimens. Subchondral bone architecture (bone volume fraction, trabecular thickness, degree of anisotropy, and structure model index) exhibited good correlation with gold standard micro-CT (error ~5%), surpassing the conventional limitations of spatial resolution in clinical MDCT scanners. Joint space analysis demonstrated the potential for sensitive 3D joint space mapping beyond that of qualitative radiographic scores in application to non-weight-bearing versus weight-bearing lower extremities and assessment of phalangeal joint space integrity in the upper extremities. Conclusion The CBCT extremity scanner demonstrated promising initial results in accurate pQCT analysis from images acquired with each CBCT scan. Future studies will include improved x-ray scatter correction and image reconstruction techniques to further improve accuracy and to correlate pQCT metrics with known pathology. PMID:25076823

  8. The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH).

    PubMed

    García-Rojo, Marcial; Gonçalves, Luís; Blobel, Bernd

    2012-01-01

    The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH) is a European COST Action that ran from 2007 to 2011. COST Actions are funded by the COST (European Cooperation in the field of Scientific and Technical Research) Agency, supported by the Seventh Framework Programme for Research and Technological Development (FP7) of the European Union. EURO-TELEPATH's main objectives were evaluating and validating the common technological framework and communication standards required to access, transmit and manage digital medical records by pathologists and other medical professionals in a networked environment. The project was organized in four working groups. Working Group 1 "Business modeling in pathology" designed the main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modeling Notation (BPMN). Working Group 2 "Informatics standards in pathology" was dedicated to promoting the development and application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Working Group 3 "Images: Analysis, Processing, Retrieval and Management" worked on the use of virtual or digital slides, which are fostering the use of image processing and analysis in pathology not only for research purposes but also in daily practice. Working Group 4 "Technology and Automation in Pathology" focused on studying the adequacy of currently existing technical solutions, including, e.g., the quality of images obtained by slide scanners and the efficiency of image analysis applications. Major outcomes of this Action are the collaboration with international health informatics standardization bodies to foster the development of standards for digital pathology, and a new approach to workflow analysis based on business process modeling. Health terminology standardization research has become a topic of high interest. Future research work should focus on the standardization of automatic image analysis and tissue microarray imaging.

  9. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drukker, Karen, E-mail: kdrukker@uchicago.edu; Giger, Maryellen L.; Li, Hui

    2014-03-15

    Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy, with final diagnosis resulting in 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method, "QIA alone"; (2) the three-compartment breast (3CB) composition measure - derived from the dual-energy mammography - of water, lipid, and protein thickness was assessed, "3CB alone"; and (3) information from QIA and 3CB was combined, "QIA + 3CB." Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland-Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the "QIA alone" method, 0.72 (0.07) for the "3CB alone" method, and 0.86 (0.04) for "QIA + 3CB" combined. The difference in AUC was 0.043 between "QIA + 3CB" and "QIA alone" but failed to reach statistical significance (95% confidence interval [-0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.

  10. Comparative study of pulsed-continuous arterial spin labeling and dynamic susceptibility contrast imaging by histogram analysis in evaluation of glial tumors.

    PubMed

    Arisawa, Atsuko; Watanabe, Yoshiyuki; Tanaka, Hisashi; Takahashi, Hiroto; Matsuo, Chisato; Fujiwara, Takuya; Fujiwara, Masahiro; Fujimoto, Yasunori; Tomiyama, Noriyuki

    2018-06-01

    Arterial spin labeling (ASL) is a non-invasive perfusion technique that may be an alternative to dynamic susceptibility contrast magnetic resonance imaging (DSC-MRI) for assessment of brain tumors. To our knowledge, there have been no reports on histogram analysis of ASL. The purpose of this study was to determine whether ASL is comparable with DSC-MRI in terms of differentiating high-grade and low-grade gliomas by evaluating histogram analysis of cerebral blood flow (CBF) in the entire tumor. Thirty-four patients with pathologically proven glioma underwent ASL and DSC-MRI. High-signal areas on contrast-enhanced T1-weighted images or high-intensity areas on fluid-attenuated inversion recovery images were designated as the volumes of interest (VOIs). ASL-CBF, DSC-CBF, and DSC-cerebral blood volume maps were constructed and co-registered to the VOI. Perfusion histogram analyses of the whole VOI and statistical analyses were performed to compare the ASL and DSC images. There was no significant difference in the mean values for any of the histogram metrics in either the low-grade glioma (n = 15) or the high-grade glioma (n = 19) group. Strong correlations were seen in the 75th percentile, mean, median, and standard deviation values between the ASL and DSC images. The area under the curve values tended to be greater for the DSC images than for the ASL images. DSC-MRI is superior to ASL for distinguishing high-grade from low-grade glioma. ASL could be an alternative evaluation method when DSC-MRI cannot be used, e.g., in patients with renal failure, those in whom repeated examination is required, and children.
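
    The whole-VOI histogram metrics compared in this study (mean, median, standard deviation, 75th percentile) reduce to a few lines of array code; a minimal sketch, assuming co-registered CBF maps and VOI masks supplied as NumPy arrays:

```python
import numpy as np

def perfusion_histogram_metrics(cbf_map, voi_mask):
    """Whole-VOI histogram metrics used to compare ASL and DSC CBF maps."""
    vals = cbf_map[voi_mask > 0]
    return {
        "mean": float(vals.mean()),
        "median": float(np.median(vals)),
        "std": float(vals.std(ddof=1)),
        "p75": float(np.percentile(vals, 75)),
    }
```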

  11. A picture tells a thousand words: A content analysis of concussion-related images online.

    PubMed

    Ahmed, Osman H; Lee, Hopin; Struik, Laura L

    2016-09-01

    Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sport-related concussion, the content of images and meta-data shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying meta-data on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr by using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were: non-static images; illustrations; animations; or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced to the current international concussion management guidelines. From 300 potentially relevant images, 176 images were included for analysis; 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sport Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be used as an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via the use of image-sharing platforms. Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine clinicians. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Comparison of retinal thickness by Fourier-domain optical coherence tomography and OCT retinal image analysis software segmentation analysis derived from Stratus optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Tátrai, Erika; Ranganathan, Sudarshan; Ferencz, Mária; Debuc, Delia Cabrera; Somfai, Gábor Márk

    2011-05-01

    Purpose: To compare thickness measurements between Fourier-domain optical coherence tomography (FD-OCT) and time-domain OCT images analyzed with a custom-built OCT retinal image analysis software (OCTRIMA). Methods: Macular mapping (MM) by StratusOCT and the MM5 and MM6 scanning protocols of an RTVue-100 FD-OCT device were performed on 11 subjects with no retinal pathology. Retinal thickness (RT) and the thickness of the ganglion cell complex (GCC) obtained with the MM6 protocol were compared for each early treatment diabetic retinopathy study (ETDRS)-like region with the corresponding results obtained with OCTRIMA. RT results were compared by analysis of variance with a Dunnett post hoc test, while GCC results were compared by paired t-test. Results: A high correlation was obtained for RT between OCTRIMA and the MM5 and MM6 protocols. In all regions, StratusOCT provided the lowest RT values (mean difference 43 +/- 8 μm compared to OCTRIMA, and 42 +/- 14 μm compared to RTVue MM6). All RTVue GCC measurements were significantly thicker (mean difference between 6 and 12 μm) than the GCC measurements of OCTRIMA. Conclusion: High correspondence was obtained not only for RT measurements but also for the segmentation of intraretinal layers between FD-OCT and StratusOCT-derived OCTRIMA analysis. However, a correction factor is required to compensate for OCT-specific differences to make measurements more comparable across available OCT devices.

  13. The Influence of University Image on Student Behaviour

    ERIC Educational Resources Information Center

    Alves, Helena; Raposo, Mario

    2010-01-01

    Purpose: The purpose of this paper is to analyse the influence of image on student satisfaction and loyalty. Design/methodology/approach: In order to accomplish the objectives proposed, a model reflecting the influence of image on student satisfaction and loyalty is applied. The model is tested through use of structural equations and the final…

  14. The Translational Role of Diffusion Tensor Image Analysis in Animal Models of Developmental Pathologies

    PubMed Central

    Oguz, Ipek; McMurray, Matthew S.; Styner, Martin; Johns, Josephine M.

    2013-01-01

    Diffusion Tensor Magnetic Resonance Imaging (DTI) has proven itself a powerful technique for clinical investigation of the neurobiological targets and mechanisms underlying developmental pathologies. The success of DTI in clinical studies has demonstrated its great potential for understanding translational animal models of clinical disorders, and preclinical animal researchers are beginning to embrace this new technology to study developmental pathologies. In animal models, genetics can be effectively controlled, drugs consistently administered, subject compliance ensured, and image acquisition times dramatically increased to reduce between-subject variability and improve image quality. When pairing these strengths with the many positive attributes of DTI, such as the ability to investigate microstructural brain organization and connectivity, it becomes possible to delve deeper into the study of both normal and abnormal development. The purpose of this review is to provide new preclinical investigators with an introductory source of information about the analysis of data resulting from small animal DTI studies to facilitate the translation of these studies to clinical data. In addition to an in depth review of translational analysis techniques, we present a number of relevant clinical and animal studies using DTI to investigate developmental insults in order to further illustrate techniques and to highlight where small animal DTI could potentially provide a wealth of translational data to inform clinical researchers. PMID:22627095

  15. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing, in near real-time, the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Owing to the simplicity of its principle of operation, this method has the potential to circumvent possible biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we conducted benchmark experiments in which aerodynamic particle size distributions were obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
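
    Converting a measured settling velocity to an aerodynamic diameter follows from Stokes' law; a minimal sketch that neglects the Cunningham slip correction, with parameter values that are illustrative assumptions rather than those of the device described:

```python
import numpy as np

def aerodynamic_diameter(settling_velocity,
                         air_viscosity=1.81e-5,  # Pa*s, air at ~20 degC
                         unit_density=1000.0,    # kg/m^3, aerodynamic convention
                         g=9.81):
    """Aerodynamic diameter (m) from a settling velocity (m/s) via Stokes' law:
    v = rho * d^2 * g / (18 * mu)  =>  d = sqrt(18 * mu * v / (rho * g))."""
    return np.sqrt(18.0 * air_viscosity * settling_velocity / (unit_density * g))

# example: a settling velocity of 0.3 mm/s corresponds to roughly 3 micrometres
# print(aerodynamic_diameter(3e-4) * 1e6)
```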

  16. CytoSpectre: a tool for spectral analysis of oriented structures on cellular and subcellular levels.

    PubMed

    Kartasalo, Kimmo; Pölönen, Risto-Pekka; Ojala, Marisa; Rasku, Jyrki; Lekkala, Jukka; Aalto-Setälä, Katriina; Kallio, Pasi

    2015-10-26

    Orientation and the degree of isotropy are important in many biological systems such as the sarcomeres of cardiomyocytes and other fibrillar structures of the cytoskeleton. Image based analysis of such structures is often limited to qualitative evaluation by human experts, hampering the throughput, repeatability and reliability of the analyses. Software tools are not readily available for this purpose and the existing methods typically rely at least partly on manual operation. We developed CytoSpectre, an automated tool based on spectral analysis, allowing the quantification of orientation and also size distributions of structures in microscopy images. CytoSpectre utilizes the Fourier transform to estimate the power spectrum of an image and based on the spectrum, computes parameter values describing, among others, the mean orientation, isotropy and size of target structures. The analysis can be further tuned to focus on targets of particular size at cellular or subcellular scales. The software can be operated via a graphical user interface without any programming expertise. We analyzed the performance of CytoSpectre by extensive simulations using artificial images, by benchmarking against FibrilTool and by comparisons with manual measurements performed for real images by a panel of human experts. The software was found to be tolerant against noise and blurring and superior to FibrilTool when analyzing realistic targets with degraded image quality. The analysis of real images indicated general good agreement between computational and manual results while also revealing notable expert-to-expert variation. Moreover, the experiment showed that CytoSpectre can handle images obtained of different cell types using different microscopy techniques. Finally, we studied the effect of mechanical stretching on cardiomyocytes to demonstrate the software in an actual experiment and observed changes in cellular orientation in response to stretching. CytoSpectre, a versatile, easy-to-use software tool for spectral analysis of microscopy images was developed. The tool is compatible with most 2D images and can be used to analyze targets at different scales. We expect the tool to be useful in diverse applications dealing with structures whose orientation and size distributions are of interest. While designed for the biological field, the software could also be useful in non-biological applications.
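
    A minimal sketch of orientation estimation from the 2D power spectrum, the core idea behind the tool described above; the windowing, polar integration, and size-distribution analysis that CytoSpectre performs are omitted, and this is not its actual code:

```python
import numpy as np

def dominant_orientation_deg(image):
    """Estimate the mean orientation (degrees, 0-180) of elongated structures
    from the image power spectrum; the spectrum of an oriented texture is
    elongated perpendicular to the structures themselves."""
    img = np.asarray(image, float)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.mgrid[0:h, 0:w]
    theta = np.arctan2(y - cy, x - cx)   # angle of each frequency sample
    power[cy, cx] = 0.0                  # drop the DC term
    # circular mean over doubled angles because orientation is pi-periodic
    c = np.sum(power * np.cos(2.0 * theta))
    s = np.sum(power * np.sin(2.0 * theta))
    spectral = 0.5 * np.arctan2(s, c)
    return float(np.degrees(spectral + np.pi / 2.0) % 180.0)
```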

  17. Spatiotemporal analysis of tumor uptake patterns in dynamic (18)FDG-PET and dynamic contrast enhanced CT.

    PubMed

    Malinen, Eirik; Rødal, Jan; Knudtsen, Ingerid Skjei; Søvik, Åste; Skogmo, Hege Kippenes

    2011-08-01

    Molecular and functional imaging techniques such as dynamic positron emission tomography (DPET) and dynamic contrast enhanced computed tomography (DCECT) may provide improved characterization of tumors compared to conventional anatomic imaging. The purpose of the current work was to compare spatiotemporal uptake patterns in DPET and DCECT images. A PET/CT protocol comprising DCECT with an iodine based contrast agent and DPET with (18)F-fluorodeoxyglucose was set up. The imaging protocol was used for examination of three dogs with spontaneous tumors of the head and neck at sessions prior to and after fractionated radiotherapy. Software tools were developed for downsampling the DCECT image series to the PET image dimensions, for segmentation of tracer uptake pattern in the tumors and for spatiotemporal correlation analysis of DCECT and DPET images. DCECT images evaluated one minute post injection qualitatively resembled the DPET images at most imaging sessions. Segmentation by region growing gave similar tumor extensions in DCECT and DPET images, with a median Dice similarity coefficient of 0.81. A relatively high correlation (median 0.85) was found between temporal tumor uptake patterns from DPET and DCECT. The heterogeneity in tumor uptake was not significantly different in the DPET and DCECT images. The median of the spatial correlation was 0.72. DCECT and DPET gave similar temporal wash-in characteristics, and the images also showed a relatively high spatial correlation. Hence, if the limited spatial resolution of DPET is considered adequate, a single DPET scan only for assessing both tumor perfusion and metabolic activity may be considered. However, further work on a larger number of cases is needed to verify the correlations observed in the present study.
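
    The Dice similarity coefficient and voxel-wise spatial correlation used in this comparison are straightforward to compute on co-registered maps; a minimal sketch under the assumption of NumPy arrays sharing a common tumor ROI:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def spatial_correlation(map_a, map_b, roi):
    """Pearson correlation of two uptake maps inside a common tumor ROI."""
    a, b = map_a[roi > 0].ravel(), map_b[roi > 0].ravel()
    return float(np.corrcoef(a, b)[0, 1])
```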

  18. The implementation of CMOS sensors within a real time digital mammography intelligent imaging system: The I-ImaS System

    NASA Astrophysics Data System (ADS)

    Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.

    2009-07-01

    The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, where digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be carried out simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images obtained from the prototype system of real excised breast tissue, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were also experimentally acquired, with the statistical analysis of the data performed off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using statistical information extracted from the breast as a feedback mechanism is beneficial and foreseeable in the near future.

  19. Initial phantom study comparing image quality in computed tomography using adaptive statistical iterative reconstruction and new adaptive statistical iterative reconstruction v.

    PubMed

    Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju

    2015-01-01

    The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing the image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP, and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). The image noise was measured in the first study using a body phantom, the CNR was measured in the second study using a contrast phantom, and the spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis in the first and second studies showed that the images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis in the third study also showed that the images reconstructed using ASIR-V had significantly improved spatial resolution compared with those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to reduce the radiation dose further without compromising image quality.
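
    A minimal sketch of ROI-based noise and contrast-to-noise measurements of the kind used in such phantom comparisons; ROI placement and the exact CNR convention vary between studies, so this is an assumption rather than the authors' protocol:

```python
import numpy as np

def roi_noise(image, roi):
    """Noise estimate: standard deviation of pixel values inside a uniform ROI."""
    return float(image[roi > 0].std(ddof=1))

def contrast_to_noise_ratio(image, roi_object, roi_background):
    """CNR of a phantom insert: mean object/background difference divided by
    the background standard deviation."""
    obj = image[roi_object > 0]
    bkg = image[roi_background > 0]
    return float(abs(obj.mean() - bkg.mean()) / bkg.std(ddof=1))
```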

  20. Image retrieval and processing system version 2.0 development work

    NASA Technical Reports Server (NTRS)

    Slavney, Susan H.; Guinness, Edward A.

    1991-01-01

    The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work was begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of the software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.

  1. A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.

    PubMed

    Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan

    2017-12-01

    A method was developed to recognize anatomical site and image acquisition view automatically in 2D X-ray images that are used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. The X-ray images of 180 patients across six disease sites (the brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two images in orthogonal views per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown in the images, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
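
    Each node of the hierarchical model pairs PCA feature extraction with a binary SVM. The sketch below is a minimal illustration of one such node using scikit-learn on synthetic stand-in data; the image size, number of components, and labels are assumptions, not the authors' configuration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins for flattened 2D X-ray images (e.g. 64x64 pixels)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 64 * 64))
    y = rng.integers(0, 2, size=120)  # binary label for one node of the hierarchy

    # One node: PCA feature extraction followed by a binary SVM classifier
    node = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
    scores = cross_val_score(node, X, y, cv=5, scoring="f1")
    print("mean F1:", scores.mean())
    ```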

  2. Linearity, Bias, and Precision of Hepatic Proton Density Fat Fraction Measurements by Using MR Imaging: A Meta-Analysis.

    PubMed

    Yokoo, Takeshi; Serai, Suraj D; Pirasteh, Ali; Bashir, Mustafa R; Hamilton, Gavin; Hernando, Diego; Hu, Houchun H; Hetterich, Holger; Kühn, Jens-Peter; Kukuk, Guido M; Loomba, Rohit; Middleton, Michael S; Obuchowski, Nancy A; Song, Ji Soo; Tang, An; Wu, Xinhuai; Reeder, Scott B; Sirlin, Claude B

    2018-02-01

    Purpose To determine the linearity, bias, and precision of hepatic proton density fat fraction (PDFF) measurements by using magnetic resonance (MR) imaging across different field strengths, imager manufacturers, and reconstruction methods. Materials and Methods This meta-analysis was performed in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A systematic literature search identified studies that evaluated the linearity and/or bias of hepatic PDFF measurements by using MR imaging (hereafter, MR imaging-PDFF) against PDFF measurements by using colocalized MR spectroscopy (hereafter, MR spectroscopy-PDFF) or the precision of MR imaging-PDFF. The quality of each study was evaluated by using the Quality Assessment of Studies of Diagnostic Accuracy 2 tool. De-identified original data sets from the selected studies were pooled. Linearity was evaluated by using linear regression between MR imaging-PDFF and MR spectroscopy-PDFF measurements. Bias, defined as the mean difference between MR imaging-PDFF and MR spectroscopy-PDFF measurements, was evaluated by using Bland-Altman analysis. Precision, defined as the agreement between repeated MR imaging-PDFF measurements, was evaluated by using a linear mixed-effects model, with field strength, imager manufacturer, reconstruction method, and region of interest as random effects. Results Twenty-three studies (1679 participants) were selected for linearity and bias analyses and 11 studies (425 participants) were selected for precision analyses. MR imaging-PDFF was linear with MR spectroscopy-PDFF (R² = 0.96). Regression slope (0.97; P < .001) and mean Bland-Altman bias (-0.13%; 95% limits of agreement: -3.95%, 3.40%) indicated minimal underestimation by using MR imaging-PDFF. MR imaging-PDFF was precise at the region-of-interest level, with repeatability and reproducibility coefficients of 2.99% and 4.12%, respectively. Field strength, imager manufacturer, and reconstruction method each had minimal effects on reproducibility. Conclusion MR imaging-PDFF has excellent linearity, bias, and precision across different field strengths, imager manufacturers, and reconstruction methods. © RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on October 2, 2017.
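
    The linearity and bias analyses described above rest on standard linear regression and Bland-Altman statistics. The following is a small illustrative sketch of those two computations on synthetic paired PDFF values; the simulated slope, bias, and noise are chosen only to echo the reported magnitudes and are not the pooled study data.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic paired PDFF measurements (%) from MR imaging and MR spectroscopy
    rng = np.random.default_rng(1)
    pdff_spect = rng.uniform(0, 40, size=200)
    pdff_mri = 0.97 * pdff_spect - 0.13 + rng.normal(0, 1.8, size=200)

    # Linearity: regression of MRI-PDFF on spectroscopy-PDFF
    slope, intercept, r, p, se = stats.linregress(pdff_spect, pdff_mri)
    print(f"slope={slope:.2f}, R^2={r**2:.2f}")

    # Bias and 95% limits of agreement (Bland-Altman)
    diff = pdff_mri - pdff_spect
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    print(f"bias={bias:.2f}%, LoA=({bias - loa:.2f}%, {bias + loa:.2f}%)")
    ```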

  3. Application of deep learning to the classification of images from colposcopy.

    PubMed

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-03-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.
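
    The abstract names Keras/TensorFlow together with L2 regularization, dropout, and data augmentation. The sketch below is a minimal three-class Keras model illustrating how those pieces fit together; the architecture, input size, and augmentation choices are assumptions and not the authors' network.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models, regularizers

    num_classes = 3  # severe dysplasia, CIS, invasive cancer

    # Simple data augmentation, applied on the fly to the input images
    augment = models.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ])

    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        augment,
        layers.Rescaling(1.0 / 255),
        layers.Conv2D(16, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()
    ```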

  4. Application of deep learning to the classification of images from colposcopy

    PubMed Central

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-01-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images. PMID:29456725

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dias, M F; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center Massachusetts General Hospital; Seco, J

    Purpose: Research in carbon imaging has been growing in recent years as a way to improve treatment accuracy and patient positioning in carbon therapy. The purpose of this tool is to allow a fast and flexible way to generate CDRR data without the need to use Monte Carlo (MC) simulations. It can also be used to predict future clinically measured data. Methods: A python interface has been developed, which uses information from CT or 4DCT and the treatment calibration curve to compute the Water Equivalent Path Length (WEPL) of carbon ions. A GPU-based ray tracing algorithm computes the WEPL of each individual carbon traveling through the CT voxels. A multiple peak detection method to estimate high contrast margin positioning has been implemented (described elsewhere). MC simulations have been used to simulate carbon depth dose curves in order to simulate the response of a range detector. Results: The tool allows the upload of CT or 4DCT images. The user has the possibility to select the phase/slice of interest as well as position, angle, etc. The WEPL is represented as a range detector which can be used to assess range dilution and multiple peak detection effects. The tool also provides knowledge of the minimum energy that should be considered for imaging purposes. The multiple peak detection method has been used in a lung tumor case, showing an accuracy of 1 mm in determining the exact interface position. Conclusion: The tool offers an easy and fast way to simulate carbon imaging data. It can be used for educational and clinical purposes, allowing the user to test beam energies and angles before real acquisition. An analysis add-on is being developed, where the user will have the opportunity to select different reconstruction methods and detector types (range or energy). Fundacao para a Ciencia e a Tecnologia (FCT), PhD Grant number SFRH/BD/85749/2012.
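
    The core computation mentioned above is a ray trace accumulating water-equivalent path length through CT voxels. The following is a simplified CPU sketch for a single axis-aligned ray; the HU-to-relative-stopping-power calibration is a hypothetical placeholder, since the real curve is machine- and protocol-specific.

    ```python
    import numpy as np

    def hu_to_rsp(hu):
        """Hypothetical piecewise-linear calibration from HU to relative
        stopping power (the real curve is treatment-machine specific)."""
        return np.where(hu < 0, 1.0 + hu / 1000.0, 1.0 + hu / 2000.0)

    def wepl_along_row(ct_slice, row, voxel_size_mm):
        """Water-equivalent path length of a ray crossing one CT row."""
        rsp = hu_to_rsp(ct_slice[row, :].astype(float))
        return float(rsp.sum() * voxel_size_mm)

    # Synthetic CT slice: water (0 HU) with a block of bone-like material (+700 HU)
    ct = np.zeros((256, 256))
    ct[100:140, 120:180] = 700.0
    print("WEPL (mm):", wepl_along_row(ct, row=120, voxel_size_mm=1.0))
    ```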

  6. Earth resources shuttle imaging radar. [systems analysis and design analysis of pulse radar for earth resources information system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A report is presented on a preliminary design of a Synthetic Array Radar (SAR) intended for experimental use with the space shuttle program. The radar is called the Earth Resources Shuttle Imaging Radar (ERSIR). Its primary purpose is to determine the usefulness of SAR in monitoring and managing earth resources. The design of the ERSIR, along with tradeoffs made during its evolution, is discussed. The ERSIR consists of a flight sensor for collecting the raw radar data and a ground sensor used both for reducing these radar data to images and for extracting earth resources information from the data. The flight sensor consists of two high-powered, coherent pulse radars, one that operates at L-band and the other at X-band. Radar data, recorded on tape, can either be transmitted via a digital data link to a ground terminal or delivered on tape to the ground station after the shuttle lands. A description of data processing equipment and display devices is given.

  7. Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Juhun, E-mail: leej15@upmc.edu; Nishikawa, Robert M.; Reiser, Ingrid

    2015-09-15

    Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm and then a triangulated surface was extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier and leave-one-out cross-validation was utilized to evaluate the classification performances of the features. The area under the receiver operating characteristic curve (AUC, area under curve) was used as a figure of merit. Results: Among curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptors) were selected via the feature selection scheme and its resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), which is a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, which showed similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure contains useful information in classifying breast tumors. Using this, one can reduce the number of features in a classifier, which may result in more robust classifiers for different datasets.
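
    The classifier evaluation described above (logistic regression with leave-one-out cross-validation, scored by AUC) can be sketched with scikit-learn as follows; the two synthetic features stand in for C_T and RGI and are not the study data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    # Synthetic features, e.g. normalized total curvature and radial gradient index
    rng = np.random.default_rng(2)
    n = 119
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)  # benign/malignant

    clf = LogisticRegression()
    probs = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("LOOCV AUC:", roc_auc_score(y, probs))
    ```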

  8. Local plate/rod descriptors of 3D trabecular bone micro-CT images from medial axis topologic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peyrin, Francoise; Attali, Dominique; Chappard, Christine

    Purpose: Trabecular bone microarchitecture is made of a complex network of plate and rod structures evolving with age and disease. The purpose of this article is to propose a new 3D local analysis method for the quantitative assessment of parameters related to the geometry of trabecular bone microarchitecture. Methods: The method is based on the topologic classification of the medial axis of the 3D image into branches, rods, and plates. Thanks to the reversibility of the medial axis, the classification is next extended to the whole 3D image. Finally, the percentages of rods and plates as well as their mean thicknesses are calculated. The method was applied both to simulated test images and 3D micro-CT images of human trabecular bone. Results: The classification of simulated phantoms made of plates and rods shows that the maximum error in the quantitative percentages of plate and rods is less than 6% and smaller than with the structure model index (SMI). Micro-CT images of human femoral bone taken in osteoporosis and early or advanced osteoarthritis were analyzed. Despite the large physiological variability, the present method avoids the underestimation of rods observed with other local methods. The relative percentages of rods and plates were not significantly different between osteoarthritis and osteoporotic groups, whereas their absolute percentages were in relation to an increase of rod and plate thicknesses in advanced osteoarthritis with also higher relative and absolute number of nodes. Conclusions: The proposed method is model-independent, robust to surface irregularities, and enables geometrical characterization of not only skeletal structures but entire 3D images. Its application provided more accurate results than the standard SMI on simple simulated phantoms, but the discrepancy observed on the advanced osteoarthritis group raises questions that will require further investigations. The systematic use of such a local method in the characterization of trabecular bone samples could provide new insight in bone microarchitecture changes related to bone diseases or to those induced by drugs or therapy.

  9. Evaluation of MRI and cannabinoid type 1 receptor PET templates constructed using DARTEL for spatial normalization of rat brains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg

    Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the evaluation of an MRI template and the calculation of several cannabinoid type 1 (CB1) receptor PET templates constructed using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity. Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.

  10. Temporal Processing of Dynamic Positron Emission Tomography via Principal Component Analysis in the Sinogram Domain

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Parker, B. J.; Feng, D. D.; Fulton, R.

    2004-10-01

    In this paper, we compare various temporal analysis schemes applied to dynamic PET for improved quantification, image quality and temporal compression purposes. We compare an optimal sampling schedule (OSS) design, principal component analysis (PCA) applied in the image domain, and principal component analysis applied in the sinogram domain; for region-of-interest quantification, sinogram-domain PCA is combined with the Huesman algorithm to quantify from the sinograms directly without requiring reconstruction of all PCA channels. Using a simulated phantom FDG brain study and three clinical studies, we evaluate the fidelity of the compressed data for estimation of local cerebral metabolic rate of glucose by a four-compartment model. Our results show that using a noise-normalized PCA in the sinogram domain gives similar compression ratio and quantitative accuracy to OSS, but with substantially better precision. These results indicate that sinogram-domain PCA for dynamic PET can be a useful preprocessing stage for PET compression and quantification applications.
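
    As a rough illustration of sinogram-domain PCA with noise normalization, the sketch below applies a simple variance-stabilizing normalization to synthetic dynamic frames before extracting a few principal channels; the frame counts, normalization choice, and data are assumptions rather than the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Synthetic dynamic PET sinograms: 24 time frames, each flattened to one vector
    rng = np.random.default_rng(3)
    n_frames, n_bins = 24, 180 * 128
    sinograms = rng.poisson(lam=5.0, size=(n_frames, n_bins)).astype(float)

    # One simple noise normalization for Poisson-like data: scale each frame
    # by the square root of its mean count level
    weights = np.sqrt(sinograms.mean(axis=1, keepdims=True))
    normalized = sinograms / weights

    # Keep a few principal channels as the temporally compressed representation
    pca = PCA(n_components=4)
    channels = pca.fit_transform(normalized)
    print("explained variance ratio:", pca.explained_variance_ratio_)
    ```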

  11. WE-D-204-06: An Open Source ImageJ CatPhan Analysis Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, G

    2015-06-15

    Purpose: The CatPhan is a popular QA device for assessing CT image quality. There are a number of software options which perform analysis of the CatPhan. However, there is often little ability for the user to adjust the analysis if it isn't running properly, and these are all expensive options. An open source tool is an effective solution. Methods: To use the software, the user imports the CT as an image sequence in ImageJ. The user then scrolls to the slice with the lateral dots. The user then runs the plugin. If tolerance constraints are not already created, the user is prompted to enter them or to use generic tolerances. Upon completion of the analysis, the plugin calls pdfLaTeX to compile the PDF report. There is a CSV version of the report as well. A log of the results from all CatPhan scans is kept as a CSV file. The user can use this to baseline the machine. Results: The tool is capable of detecting the orientation of the phantom. If the CatPhan was scanned backwards, one can simply flip the stack of images horizontally and proceed with the analysis. The analysis includes Sensitometry (estimating the effective beam energy), HU values and linearity, Low Contrast Visibility (using LDPE & Polystyrene), Contrast Scale, Geometric Accuracy, Slice Thickness Accuracy, Spatial Resolution (giving the MTF using the line pairs as well as the point spread function), CNR, Low Contrast Detectability (including the raw data), and Uniformity (including the Cupping Effect). Conclusion: This is a robust tool that analyzes more components of the CatPhan than other software options (with the exception of ImageOwl). It produces an elegant PDF and keeps a log of analyses for long-term tracking of the system. Because it is open source, users are able to customize any component of it.

  12. On the dosimetric effect and reduction of inverse consistency and transitivity errors in deformable image registration for dose accumulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.; Hardcastle, Nicholas; Tome, Wolfgang A.

    2012-01-15

    Purpose: Deformable image registration (DIR) is necessary for accurate dose accumulation between multiple radiotherapy image sets. DIR algorithms can suffer from inverse and transitivity inconsistencies. When using deformation vector fields (DVFs) that exhibit inverse-inconsistency and are nontransitive, dose accumulation on a given image set via different image pathways will lead to different accumulated doses. The purpose of this study was to investigate the dosimetric effect of, and propose a postprocessing solution to reduce, inverse consistency and transitivity errors. Methods: Four MVCT images and four phases of a lung 4DCT, each with an associated calculated dose, were selected for analysis. DVFs between all four images in each data set were created using the Fast Symmetric Demons algorithm. Dose was accumulated on the fourth image in each set using DIR via two different image pathways. The two accumulated doses on the fourth image were compared. The inverse consistency and transitivity errors in the DVFs were then reduced. The dose accumulation was repeated using the processed DVFs, the results of which were compared with the accumulated dose from the original DVFs. To evaluate the influence of the postprocessing technique on DVF accuracy, the original and processed DVF accuracy was evaluated on the lung 4DCT data on which anatomical landmarks had been identified by an expert. Results: Dose accumulation to the same image via different image pathways resulted in two different accumulated dose results. After the inverse consistency errors were reduced, the difference between the accumulated doses diminished. The difference was further reduced after reducing the transitivity errors. The postprocessing technique had minimal effect on the accuracy of the DVF for the lung 4DCT images. Conclusions: This study shows that inverse consistency and transitivity errors in DIR have a significant dosimetric effect in dose accumulation; depending on the image pathway taken to accumulate the dose, different results may be obtained. A postprocessing technique that reduces inverse consistency and transitivity error is presented, which allows for consistent dose accumulation regardless of the image pathway followed.
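
    Inverse consistency can be checked by composing the forward DVF with the backward DVF and measuring how far the result is from the identity. The sketch below does this for a 2D displacement field pair; it is a hedged illustration of the error metric only, not the postprocessing technique proposed in the study.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def inverse_consistency_error(dvf_fwd, dvf_bwd):
        """Mean magnitude of F(x) + B(x + F(x)), which is ~0 for an
        inverse-consistent pair; fields have shape (2, H, W) in voxel units."""
        h, w = dvf_fwd.shape[1:]
        grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
        warped = grid + dvf_fwd                   # x + F(x)
        bwd_at_warped = np.stack([
            map_coordinates(dvf_bwd[i], warped, order=1, mode="nearest")
            for i in range(2)
        ])
        residual = dvf_fwd + bwd_at_warped        # should vanish if consistent
        return float(np.linalg.norm(residual, axis=0).mean())

    # Toy example: a constant shift and its exact inverse give zero error
    fwd = np.zeros((2, 64, 64))
    fwd[0] += 3.0
    bwd = -fwd
    print(inverse_consistency_error(fwd, bwd))
    ```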

  13. Experimental Design and Data Analysis in Receiver Operating Characteristic Studies: Lessons Learned from Reports in Radiology from 1997 to 2006

    PubMed Central

    Shiraishi, Junji; Pesce, Lorenzo L.; Metz, Charles E.; Doi, Kunio

    2009-01-01

    Purpose: To provide a broad perspective concerning the recent use of receiver operating characteristic (ROC) analysis in medical imaging by reviewing ROC studies published in Radiology between 1997 and 2006 for experimental design, imaging modality, medical condition, and ROC paradigm. Materials and Methods: Two hundred ninety-five studies were obtained by conducting a literature search with PubMed with two criteria: publication in Radiology between 1997 and 2006 and occurrence of the phrase “receiver operating characteristic.” Studies returned by the query that were not diagnostic imaging procedure performance evaluations were excluded. Characteristics of the remaining studies were tabulated. Results: Two hundred thirty-three (79.0%) of the 295 studies reported findings based on observers' diagnostic judgments or objective measurements. Forty-three (14.6%) did not include human observers, with most of these reporting an evaluation of a computer-aided diagnosis system or functional data obtained with computed tomography (CT) or magnetic resonance (MR) imaging. The remaining 19 (6.4%) studies were classified as reviews or meta-analyses and were excluded from our subsequent analysis. Among the various imaging modalities, MR imaging (46.0%) and CT (25.7%) were investigated most frequently. Approximately 60% (144 of 233) of ROC studies with human observers published in Radiology included three or fewer observers. Conclusion: ROC analysis is widely used in radiologic research, confirming its fundamental role in assessing diagnostic performance. However, the ROC studies reported in Radiology were not always adequate to support clear and clinically relevant conclusions. © RSNA, 2009 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.2533081632/-/DC1 PMID:19864510

  14. Tight-frame based iterative image reconstruction for spectral breast CT

    PubMed Central

    Zhao, Bo; Gao, Hao; Ding, Huanjun; Molloi, Sabee

    2013-01-01

    Purpose: To investigate the tight-frame based iterative reconstruction (TFIR) technique for spectral breast computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The experimental data were acquired with a fan-beam breast CT system based on a cadmium zinc telluride photon-counting detector. The images were reconstructed with a varying number of projections using the TFIR and filtered backprojection (FBP) techniques. The image quality between these two techniques was evaluated. The image's spatial resolution was evaluated using a high-resolution phantom, and the contrast to noise ratio (CNR) was evaluated using a postmortem breast sample. The postmortem breast samples were decomposed into water, lipid, and protein contents based on images reconstructed from TFIR with 204 projections and FBP with 614 projections. The volumetric fractions of water, lipid, and protein from the image-based measurements in both TFIR and FBP were compared to the chemical analysis. Results: The spatial resolution and CNR were comparable for the images reconstructed by TFIR with 204 projections and FBP with 614 projections. Both reconstruction techniques provided accurate quantification of water, lipid, and protein composition of the breast tissue when compared with data from the reference standard chemical analysis. Conclusions: Accurate breast tissue decomposition can be done with threefold fewer projection images by the TFIR technique without any reduction in image spatial resolution and CNR. This can result in a two-thirds reduction of the patient dose in a multislit and multislice spiral CT system in addition to the reduced scanning time in this system. PMID:23464320

  15. Novel region of interest interrogation technique for diffusion tensor imaging analysis in the canine brain.

    PubMed

    Li, Jonathan Y; Middleton, Dana M; Chen, Steven; White, Leonard; Ellinwood, N Matthew; Dickson, Patricia; Vite, Charles; Bradbury, Allison; Provenzale, James M

    2017-08-01

    Purpose We describe a novel technique for measuring diffusion tensor imaging metrics in the canine brain. We hypothesized that a standard method for region of interest placement could be developed that is highly reproducible, with less than 10% difference in measurements between raters. Methods Two sets of canine brains (three seven-week-old full-brains and two 17-week-old single hemispheres) were scanned ex-vivo on a 7T small-animal magnetic resonance imaging system. Strict region of interest placement criteria were developed and then used by two raters to independently measure diffusion tensor imaging metrics within four different white-matter regions within each specimen. Average values of fractional anisotropy, radial diffusivity, and the three eigenvalues (λ1, λ2, and λ3) within each region in each specimen overall and within each individual image slice were compared between raters by calculating the percentage difference between raters for each metric. Results The mean percentage difference between raters for all diffusion tensor imaging metrics when pooled by each region and specimen was 1.44% (range: 0.01-5.17%). The mean percentage difference between raters for all diffusion tensor imaging metrics when compared by individual image slice was 2.23% (range: 0.75-4.58%) per hemisphere. Conclusion Our results indicate that the technique described is highly reproducible, even when applied to canine specimens of differing age, morphology, and image resolution. We propose this technique for future studies of diffusion tensor imaging analysis in canine brains and for cross-sectional and longitudinal studies of canine brain models of human central nervous system disease.

  16. Percutaneous Thermal Ablation with Ultrasound Guidance. Fusion Imaging Guidance to Improve Conspicuity of Liver Metastasis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros

    Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after the fusion imaging was graded on a five-point scale, and significance was assessed by Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). Technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool to increase the conspicuity of liver metastases that were initially deemed non-visualizable on conventional US imaging.

  17. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops using almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relatively to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the viewpoint importance in structural geology and therefore the potential of using orthorectified virtual outcrops.

  18. Low-cost motility tracking system (LOCOMOTIS) for time-lapse microscopy applications and cell visualisation.

    PubMed

    Lynch, Adam E; Triajianto, Junian; Routledge, Edwin

    2014-01-01

    Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse and perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive, equipment. The microscopes used in our system were capable of a maximum magnification of 413.6×. Although resolution was lower than that of a standard inverted microscope we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8×). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81 ± 0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17 ± 0.004 (MDA-MB-231 breast cancer cells), 1.24 ± 0.006 (SC5 mouse Sertoli cells) and 2.21 ± 0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates), were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers.

  19. Low-Cost Motility Tracking System (LOCOMOTIS) for Time-Lapse Microscopy Applications and Cell Visualisation

    PubMed Central

    Lynch, Adam E.; Triajianto, Junian; Routledge, Edwin

    2014-01-01

    Direct visualisation of cells for the purpose of studying their motility has typically required expensive microscopy equipment. However, recent advances in digital sensors mean that it is now possible to image cells for a fraction of the price of a standard microscope. Along with low-cost imaging there has also been a large increase in the availability of high quality, open-source analysis programs. In this study we describe the development and performance of an expandable cell motility system employing inexpensive, commercially available digital USB microscopes to image various cell types using time-lapse and perform tracking assays in proof-of-concept experiments. With this system we were able to measure and record three separate assays simultaneously on one personal computer using identical microscopes, and obtained tracking results comparable in quality to those from other studies that used standard, more expensive, equipment. The microscopes used in our system were capable of a maximum magnification of 413.6×. Although resolution was lower than that of a standard inverted microscope we found this difference to be indistinguishable at the magnification chosen for cell tracking experiments (206.8×). In preliminary cell culture experiments using our system, velocities (mean µm/min ± SE) of 0.81±0.01 (Biomphalaria glabrata hemocytes on uncoated plates), 1.17±0.004 (MDA-MB-231 breast cancer cells), 1.24±0.006 (SC5 mouse Sertoli cells) and 2.21±0.01 (B. glabrata hemocytes on Poly-L-Lysine coated plates), were measured and are consistent with previous reports. We believe that this system, coupled with open-source analysis software, demonstrates that higher throughput time-lapse imaging of cells for the purpose of studying motility can be an affordable option for all researchers. PMID:25121722

  20. LIME: 3D visualisation and interpretation of virtual geoscience models

    NASA Astrophysics Data System (ADS)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described and case studies from outcrop geology, in hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel methods developed.

  1. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of the shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
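
    The sigmoidal boosting and fusion steps can be illustrated in simplified pixel-domain form as below; this is a hedged sketch and not the JPEG-domain, single-pass implementation described in the paper, and the gain, midpoint, and blending weight are arbitrary.

    ```python
    import numpy as np

    def sigmoid_boost(image, gain=10.0, midpoint=0.35):
        """Boost a short-exposure image with a sigmoid tone curve (values in [0, 1])."""
        boosted = 1.0 / (1.0 + np.exp(-gain * (image - midpoint)))
        # Rescale so the curve maps 0 -> 0 and 1 -> 1
        lo = 1.0 / (1.0 + np.exp(gain * midpoint))
        hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))
        return (boosted - lo) / (hi - lo)

    def fuse(short_exp, long_exp, alpha=0.5):
        """Naive pixel-wise blend of a boosted short exposure with a long exposure."""
        return alpha * sigmoid_boost(short_exp) + (1 - alpha) * long_exp

    rng = np.random.default_rng(4)
    short_img = rng.uniform(0.0, 0.4, size=(8, 8))   # dark, sharp, noisy
    long_img = np.clip(short_img * 2.5, 0, 1)        # brighter, possibly blurred
    print(fuse(short_img, long_img).round(2))
    ```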

  2. Photogrammetric Analysis of Historical Image Repositories for Virtual Reconstruction in the Field of Digital Humanities

    NASA Astrophysics Data System (ADS)

    Maiwald, F.; Vietze, T.; Schneider, D.; Henze, F.; Münster, S.; Niebling, F.

    2017-02-01

    Historical photographs contain high density of information and are of great importance as sources in humanities research. In addition to the semantic indexing of historical images based on metadata, it is also possible to reconstruct geometric information about the depicted objects or the camera position at the time of the recording by employing photogrammetric methods. The approach presented here is intended to investigate (semi-) automated photogrammetric reconstruction methods for heterogeneous collections of historical (city) photographs and photographic documentation for the use in the humanities, urban research and history sciences. From a photogrammetric point of view, these images are mostly digitized photographs. For a photogrammetric evaluation, therefore, the characteristics of scanned analog images with mostly unknown camera geometry, missing or minimal object information and low radiometric and geometric resolution have to be considered. In addition, these photographs have not been created specifically for documentation purposes and so the focus of these images is often not on the object to be evaluated. The image repositories must therefore be subjected to a preprocessing analysis of their photogrammetric usability. Investigations are carried out on the basis of a repository containing historical images of the Kronentor ("crown gate") of the Dresden Zwinger. The initial step was to assess the quality and condition of available images determining their appropriateness for generating three-dimensional point clouds from historical photos using a structure-from-motion evaluation (SfM). Then, the generated point clouds were assessed by comparing them with current measurement data of the same object.

  3. Online hyperspectral imaging system for evaluating quality of agricultural products

    NASA Astrophysics Data System (ADS)

    Mo, Changyeun; Kim, Giyoung; Lim, Jongguk

    2017-06-01

    The consumption of fresh-cut agricultural produce in Korea has been growing. The browning of fresh-cut vegetables that occurs during storage and foreign substances such as worms and slugs are some of the main causes of consumers' concerns with respect to safety and hygiene. The purpose of this study is to develop an on-line system for evaluating the quality of agricultural products using hyperspectral imaging technology. An online evaluation system with a single visible-near-infrared hyperspectral camera covering the range of 400 nm to 1000 nm was designed to assess the quality of both surfaces of agricultural products such as fresh-cut lettuce. Algorithms to detect surface browning were developed for this system. The optimal wavebands for discriminating between browning and sound lettuce as well as between browning lettuce and the conveyor belt were investigated using correlation analysis and the one-way analysis of variance method. The imaging algorithms to discriminate browning lettuce were developed using the optimal wavebands. The ratio image (RI) algorithm of the 533 nm and 697 nm images (RI533/697) for abaxial-surface lettuce, and the ratio image algorithm (RI533/697) and subtraction image (SI) algorithm (SI538-697) for adaxial-surface lettuce, had the highest classification accuracies. The classification accuracy for browning and sound lettuce was 100.0% and above 96.0%, respectively, for both surfaces. The overall results show that the online hyperspectral imaging system could potentially be used to assess the quality of agricultural products.
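
    The ratio-image and subtraction-image rules reduce to simple band arithmetic on the hyperspectral cube. The sketch below shows one possible form; the toy cube, band ordering, and threshold are hypothetical and would need calibration against training data.

    ```python
    import numpy as np

    def ratio_image(cube, wavelengths, wl_a=533, wl_b=697):
        """RI(a/b): band-ratio image from a hyperspectral cube (rows, cols, bands)."""
        a = cube[:, :, wavelengths.index(wl_a)].astype(float)
        b = cube[:, :, wavelengths.index(wl_b)].astype(float)
        return a / np.maximum(b, 1e-6)

    def subtraction_image(cube, wavelengths, wl_a=538, wl_b=697):
        """SI(a-b): band-subtraction image from a hyperspectral cube."""
        a = cube[:, :, wavelengths.index(wl_a)].astype(float)
        b = cube[:, :, wavelengths.index(wl_b)].astype(float)
        return a - b

    # Toy cube with three labelled bands; the threshold (0.8) is illustrative only
    wavelengths = [533, 538, 697]
    cube = np.random.rand(4, 4, 3)
    browning_mask = ratio_image(cube, wavelengths) < 0.8
    print(browning_mask)
    ```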

  4. Analysis of a human phenomenon: self-concept.

    PubMed

    LeMone, P

    1991-01-01

    This analysis of self-concept includes an examination of definitions, historical perspectives, theoretical basis, and closely related terms. Antecedents, consequences, defining attributes, and a definition were formulated based on the analysis. The purpose of the analysis was to provide support for the use of the label "self-concept" as a broad category that encompasses the self-esteem, identity, and body-image nursing diagnoses within Taxonomy I. This classification could allow the use of a broad diagnostic label to better describe conditions that necessitate nursing care. It may also further explain the relationships between and among those diagnoses that describe human responses to disturbance of any component of the self-concept.

  5. Review of the current state of whole slide imaging in pathology

    PubMed Central

    Pantanowitz, Liron; Valenstein, Paul N.; Evans, Andrew J.; Kaplan, Keith J.; Pfeifer, John D.; Wilbur, David C.; Collins, Laura C.; Colgan, Terence J.

    2011-01-01

    Whole slide imaging (WSI), or “virtual” microscopy, involves the scanning (digitization) of glass slides to produce “digital slides”. WSI has been advocated for diagnostic, educational and research purposes. When used for remote frozen section diagnosis, WSI requires a thorough implementation period coupled with trained support personnel. Adoption of WSI for rendering pathologic diagnoses on a routine basis has been shown to be successful in only a few “niche” applications. Wider adoption will most likely require full integration with the laboratory information system, continuous automated scanning, high-bandwidth connectivity, massive storage capacity, and more intuitive user interfaces. Nevertheless, WSI has been reported to enhance specific pathology practices, such as scanning slides received in consultation or of legal cases, of slides to be used for patient care conferences, for quality assurance purposes, to retain records of slides to be sent out or destroyed by ancillary testing, and for performing digital image analysis. In addition to technical issues, regulatory and validation requirements related to WSI have yet to be adequately addressed. Although limited validation studies have been published using WSI there are currently no standard guidelines for validating WSI for diagnostic use in the clinical laboratory. This review addresses the current status of WSI in pathology related to regulation and validation, the provision of remote and routine pathologic diagnoses, educational uses, implementation issues, and the cost-benefit analysis of adopting WSI in routine clinical practice. PMID:21886892

  6. Use Of Clinical Decision Analysis In Predicting The Efficacy Of Newer Radiological Imaging Modalities: Radioscintigraphy Versus Single Photon Transverse Section Emission Computed Tomography

    NASA Astrophysics Data System (ADS)

    Prince, John R.

    1982-12-01

    Sensitivity, specificity, and predictive accuracy have been shown to be useful measures of the clinical efficacy of diagnostic tests and can be used to predict the potential improvement in diagnostic certitude resulting from the introduction of a competing technology. This communication demonstrates how the informal use of clinical decision analysis may guide health planners in the allocation of resources, purchasing decisions, and implementation of high technology. For didactic purposes the focus is on a comparison between conventional planar radioscintigraphy (RS) and single photon transverse section emission computed tomography (SPECT). For example, positive predictive accuracy (PPA) for brain RS in a specialist hospital with a 50% disease prevalence is about 95%. SPECT should increase this predicted accuracy to 96%. In a primary care hospital with only a 15% disease prevalence the PPA is only 77%, and SPECT may increase this accuracy to about 79%. Similar calculations based on published data show that marginal improvements are expected with SPECT in the liver. It is concluded that: a) the decision to purchase a high technology imaging modality such as SPECT for clinical purposes should be analyzed on an individual organ system and institutional basis. High technology may be justified in specialist hospitals but not necessarily in primary care hospitals. This is more dependent on disease prevalence than procedure volume; b) it is questionable whether SPECT imaging will be competitive with standard RS procedures. Research should concentrate on the development of different medical applications.
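
    The predictive-accuracy figures quoted above follow from Bayes' rule applied to sensitivity, specificity, and disease prevalence. The sketch below reproduces that calculation; the sensitivity and specificity values are assumed for illustration only, since the abstract does not state them.

    ```python
    def positive_predictive_accuracy(sensitivity, specificity, prevalence):
        """Positive predictive accuracy (positive predictive value) from Bayes' rule."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Assumed sensitivity/specificity, chosen only to reproduce the order of
    # magnitude quoted in the abstract; the actual inputs are not given there.
    for prevalence in (0.50, 0.15):
        ppa = positive_predictive_accuracy(0.85, 0.95, prevalence)
        print(f"prevalence {prevalence:.0%}: PPA {ppa:.0%}")
    ```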

  7. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter is crucially important for increasing classification accuracy, and it depends on image resolution, image object size and the characteristics of the study area. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation was carried out on the classified images, and it was found that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.
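
    The accuracy metrics named above (overall accuracy and kappa coefficient) are derived from the classification confusion matrix. A minimal sketch of that computation follows; the example confusion matrix is hypothetical.

    ```python
    import numpy as np

    def overall_accuracy_and_kappa(confusion):
        """Overall accuracy and Cohen's kappa from a classification confusion matrix."""
        confusion = np.asarray(confusion, dtype=float)
        total = confusion.sum()
        observed = np.trace(confusion) / total
        expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
        kappa = (observed - expected) / (1 - expected)
        return observed, kappa

    # Hypothetical confusion matrix for three land-cover classes
    cm = [[50, 3, 2],
          [4, 45, 6],
          [1, 5, 44]]
    print(overall_accuracy_and_kappa(cm))
    ```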

  8. TH-AB-209-07: High Resolution X-Ray-Induced Acoustic Computed Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, L; Tang, S; Ahmad, M

    Purpose: X-ray radiographic absorption imaging is an invaluable tool in medical diagnostics, biology and materials science. However, the use of conventional CT is limited by two factors: the detection sensitivity to weak absorption material and the radiation dose from CT scanning. The purpose of this study is to explore X-ray induced acoustic computed tomography (XACT), a new imaging modality, which combines X-ray absorption contrast and high ultrasonic resolution to address these challenges. Methods: First, theoretical models were built to analyze the XACT sensitivity to X-ray absorption and calculate the minimal radiation dose in XACT imaging. Then, an XACT system comprised of an ultrashort X-ray pulse, a low noise ultrasound detector and a signal acquisition system was built to evaluate the X-ray induced acoustic signal generation. A piece of chicken bone and a phantom with two golden fiducial markers were exposed to a 270 kVp X-ray source with 60 ns exposure time, and the X-ray induced acoustic signal was received by a 2.25 MHz ultrasound transducer in 200 positions. XACT images were reconstructed by a filtered back-projection algorithm. Results: The theoretical analysis shows that X-ray induced acoustic signals have 100% relative sensitivity to X-ray absorption, but not to X-ray scattering. Applying this innovative technology to breast imaging, we can reduce radiation dose by a factor of 50 compared with newly FDA approved breast CT. The reconstructed images of the chicken bone and golden fiducial marker phantom reveal that the spatial resolution of the built XACT system is 350 µm. Conclusion: In XACT, the imaging sensitivity to X-ray absorption is improved and the imaging dose is dramatically reduced by using ultrashort pulsed X-ray. Taking advantage of the high ultrasonic resolution, we can also perform 3D imaging with a single X-ray pulse. This new modality has the potential to revolutionize X-ray imaging applications in medicine and biology.

  9. Estimation of leaf area index using WorldView-2 and Aster satellite image: a case study from Turkey.

    PubMed

    Günlü, Alkan; Keleş, Sedat; Ercanlı, İlker; Şenyurt, Muammer

    2017-10-04

    The objective of this study is to estimate the leaf area index (LAI) of a forest ecosystem using two different satellite images, WorldView-2 and Aster. For this purpose, 108 sample plots were taken from pure Crimean pine forest stands of the Yenice Forest Management Planning Unit in the Ilgaz Forest Management Enterprise, Turkey. Each sample plot was imaged with hemispherical photographs taken with a fish-eye camera to determine the LAI. These photographs were analyzed with the help of the Hemisfer Hemiview software program, and thus the LAI of each sample plot was estimated. Furthermore, the multiple regression analysis method was used to model the statistical relationships between the LAI values and the band spectral reflection values and some vegetation indices (VIs) obtained from the satellite images. The results show that the high-resolution WorldView-2 satellite image is better than the medium-resolution Aster satellite image in predicting the LAI. It was also seen that the results obtained by using the VIs are better than those obtained using the individual bands when the LAI is predicted from satellite images.
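
    The statistical model described above is an ordinary multiple regression of LAI on band reflectances and vegetation indices. The sketch below shows the general form with scikit-learn on synthetic plot data; the predictors and coefficients are placeholders, not the study's values.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-ins: band reflectances and vegetation indices for each plot
    rng = np.random.default_rng(5)
    n_plots = 108
    X = rng.uniform(0, 1, size=(n_plots, 4))          # e.g. two bands + two VIs
    lai = 3.0 * X[:, 2] + 1.5 * X[:, 3] + rng.normal(0, 0.3, n_plots)

    model = LinearRegression().fit(X, lai)
    print("R^2:", round(model.score(X, lai), 2))
    ```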

  10. A joint encryption/watermarking system for verifying the reliability of medical images.

    PubMed

    Bouslimi, Dalel; Coatrieux, Gouenou; Cozic, Michel; Roux, Christian

    2012-09-01

    In this paper, we propose a joint encryption/watermarking system for the purpose of protecting medical images. This system is based on an approach which combines a substitutive watermarking algorithm, the quantization index modulation, with an encryption algorithm: a stream cipher algorithm (e.g., the RC4) or a block cipher algorithm (e.g., the AES in cipher block chaining (CBC) mode of operation). Our objective is to give access to the outcomes of the image integrity and of its origin even though the image is stored encrypted. If watermarking and encryption are conducted jointly at the protection stage, watermark extraction and decryption can be applied independently. The security analysis of our scheme and experimental results achieved on 8-bit depth ultrasound images as well as on 16-bit encoded positron emission tomography images demonstrate the capability of our system to securely make available security attributes in both spatial and encrypted domains while minimizing image distortion. Furthermore, by making use of the AES block cipher in CBC mode, the proposed system is compliant with or transparent to the DICOM standard.
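
    Quantization index modulation embeds a bit by moving a host value onto one of two interleaved quantizer lattices. The sketch below is a bare-bones scalar QIM embed/extract pair for illustration; the step size and host values are arbitrary, and this is not the paper's joint encryption/watermarking scheme.

    ```python
    import numpy as np

    def qim_embed(values, bits, delta=8.0):
        """Quantization index modulation: shift each value onto one of two
        interleaved quantizer lattices depending on the watermark bit."""
        offsets = np.where(np.asarray(bits) == 0, 0.0, delta / 2.0)
        return np.round((values - offsets) / delta) * delta + offsets

    def qim_extract(values, delta=8.0):
        """Recover each bit by finding the nearer of the two lattices."""
        d0 = np.abs(values - np.round(values / delta) * delta)
        d1 = np.abs(values - (np.round((values - delta / 2) / delta) * delta + delta / 2))
        return (d1 < d0).astype(int)

    pixels = np.array([120.0, 53.0, 200.0, 77.0])
    bits = np.array([1, 0, 1, 0])
    marked = qim_embed(pixels, bits)
    print(qim_extract(marked))  # -> [1 0 1 0]
    ```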

  11. 3D printing from microfocus computed tomography (micro-CT) in human specimens: education and future implications.

    PubMed

    Shelmerdine, Susan C; Simcock, Ian C; Hutchinson, John Ciaran; Aughwane, Rosalind; Melbourne, Andrew; Nikitichev, Daniil I; Ong, Ju-Ling; Borghi, Alessandro; Cole, Garrard; Kingham, Emilia; Calder, Alistair D; Capelli, Claudio; Akhtar, Aadam; Cook, Andrew C; Schievano, Silvia; David, Anna; Ourselin, Sebastian; Sebire, Neil J; Arthurs, Owen J

    2018-06-14

    Microfocus CT (micro-CT) is an imaging method that provides three-dimensional digital data sets with comparable resolution to light microscopy. Although it has traditionally been used for non-destructive testing in engineering, aerospace industries and in preclinical animal studies, new applications are rapidly becoming available in the clinical setting including post-mortem fetal imaging and pathological specimen analysis. Printing three-dimensional models from imaging data sets for educational purposes is well established in the medical literature, but typically using low resolution (0.7 mm voxel size) data acquired from CT or MR examinations. With higher resolution imaging (voxel sizes below 1 micron, <0.001 mm) at micro-CT, smaller structures can be better characterised, and data sets post-processed to create accurate anatomical models for review and handling. In this review, we provide examples of how three-dimensional printing of micro-CT imaged specimens can provide insight into craniofacial surgical applications, developmental cardiac anatomy, placental imaging, archaeological remains and high-resolution bone imaging. We conclude with other potential future usages of this emerging technique.

  12. A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures.

    PubMed

    DeCost, Brian L; Holm, Elizabeth A

    2016-12-01

    This data article presents a data set comprised of 2048 synthetic scanning electron microscope (SEM) images of powder materials and descriptions of the corresponding 3D structures that they represent. These images were created using open source rendering software, and the generating scripts are included with the data set. Eight particle size distributions are represented with 256 independent images from each. The particle size distributions are relatively similar to each other, so that the dataset offers a useful benchmark to assess the fidelity of image analysis techniques. The characteristics of the PSDs and the resulting images are described and analyzed in more detail in the research article "Characterizing powder materials using keypoint-based computer vision methods" (B.L. DeCost, E.A. Holm, 2016) [1]. These data are freely available in a Mendeley Data archive "A large dataset of synthetic SEM images of powder materials and their ground truth 3D structures" (B.L. DeCost, E.A. Holm, 2016) located at http://dx.doi.org/10.17632/tj4syyj9mr.1[2] for any academic, educational, or research purposes.

  13. University Social Responsibility and Brand Image of Private Universities in Bangkok

    ERIC Educational Resources Information Center

    Plungpongpan, Jirawan; Tiangsoongnern, Leela; Speece, Mark

    2016-01-01

    Purpose: The purpose of this paper is to examine the effects of university social responsibility (USR) on the brand image of private universities in Thailand. Brand image is important for entry into the consideration set as prospective students evaluate options for university study. USR activities may be implicit or explicit, i.e., actively…

  14. Investigation of the Secondary School Students' Images of Scientists

    ERIC Educational Resources Information Center

    Akgün, Abuzer

    2016-01-01

    The overall purpose of this study is to explore secondary school students' images of scientists. In addition to this comprehensive purpose, it is also investigated that if these students' current images of scientists and those in which they see themselves as a scientist in the near future are consistent or not. The study was designed in line with…

  15. Imaging Flow Cytometry Analysis to Identify Differences of Survival Motor Neuron Protein Expression in Patients With Spinal Muscular Atrophy.

    PubMed

    Arakawa, Reiko; Arakawa, Masayuki; Kaneko, Kaori; Otsuki, Noriko; Aoki, Ryoko; Saito, Kayoko

    2016-08-01

    Spinal muscular atrophy is a neurodegenerative disorder caused by the deficient expression of survival motor neuron protein in motor neurons. A major goal of disease-modifying therapy is to increase survival motor neuron expression. Changes in survival motor neuron protein expression can be monitored via peripheral blood cells in patients; therefore we tested the sensitivity and utility of imaging flow cytometry for this purpose. After the immortalization of peripheral blood lymphocytes from a human healthy control subject and two patients with spinal muscular atrophy type 1 with two and three copies of SMN2 gene, respectively, we used imaging flow cytometry analysis to identify significant differences in survival motor neuron expression. A bright detail intensity analysis was used to investigate differences in the cellular localization of survival motor neuron protein. Survival motor neuron expression was significantly decreased in cells derived from patients with spinal muscular atrophy relative to those derived from a healthy control subject. Moreover, survival motor neuron expression correlated with the clinical severity of spinal muscular atrophy according to SMN2 copy number. The cellular accumulation of survival motor neuron protein was also significantly decreased in cells derived from patients with spinal muscular atrophy relative to those derived from a healthy control subject. The benefits of imaging flow cytometry for peripheral blood analysis include its capacities for analyzing heterogeneous cell populations; visualizing cell morphology; and evaluating the accumulation, localization, and expression of a target protein. Imaging flow cytometry analysis should be implemented in future studies to optimize its application as a tool for spinal muscular atrophy clinical trials. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Multiplexed immunohistochemistry, imaging, and quantitation: a review, with an assessment of Tyramide signal amplification, multispectral imaging and multiplex analysis.

    PubMed

    Stack, Edward C; Wang, Chichung; Roman, Kristin A; Hoyt, Clifford C

    2014-11-01

    Tissue sections offer the opportunity to understand a patient's condition, to make better prognostic evaluations and to select optimum treatments, as evidenced by the place pathology holds today in clinical practice. Yet, there is a wealth of information locked up in a tissue section that is only partially accessed, due mainly to the limitations of tools and methods. Often tissues are assessed primarily based on visual analysis of one or two proteins, or 2-3 DNA or RNA molecules. Even while analysis is still based on visual perception, image analysis is starting to address the variability of human perception. This is in contrast to measuring characteristics that are substantially out of reach of human perception, such as parameters revealed through co-expression, spatial relationships, heterogeneity, and low abundance molecules. What is not routinely accessed is the information revealed through simultaneous detection of multiple markers, the spatial relationships among cells and tissue in disease, and the heterogeneity now understood to be critical to developing effective therapeutic strategies. Our purpose here is to review and assess methods for multiplexed, quantitative, image-analysis-based approaches, using new multicolor immunohistochemistry methods, automated multispectral slide imaging, and advanced trainable pattern recognition software. A key aspect of our approach is presenting imagery in a workflow that engages the pathologist to utilize the strengths of human perception and judgment, while significantly expanding the range of metrics collectable from tissue sections and also providing the level of consistency and precision needed to support the complexities of personalized medicine. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  17. A Novel Imaging Analysis Method for Capturing Pharyngeal Constriction During Swallowing.

    PubMed

    Schwertner, Ryan W; Garand, Kendrea L; Pearson, William G

    2016-01-01

    Videofluoroscopic imaging of swallowing, known as the Modified Barium Study (MBS), is the standard of care for assessing swallowing difficulty. While the clinical purpose of this radiographic imaging is primarily to assess aspiration risk, valuable biomechanical data is embedded in these studies. Computational analysis of swallowing mechanics (CASM) is an established research methodology for assessing multiple interactions of swallowing mechanics based on coordinates mapping muscle function, including hyolaryngeal movement, pharyngeal shortening, tongue base retraction, and extension of the head and neck; however, coordinates characterizing pharyngeal constriction are undeveloped. The aim of this study was to establish a method for locating the superior and middle pharyngeal constrictors using hard landmarks as guides on MBS videofluoroscopic imaging, and to test the reliability of this new method. Twenty de-identified, normal MBS videos were randomly selected from a database. Two raters annotated landmarks for the superior and middle pharyngeal constrictors frame-by-frame using a semi-automated MATLAB tracker tool at two time points. Intraclass correlation coefficients were used to assess test-retest reliability between the two raters, with an ICC of 0.99 or greater for all coordinates at the retest measurement. MorphoJ integrated software was used to perform a discriminant function analysis to visualize how all 12 coordinates interact with each other in normal swallowing. The addition of the superior and middle pharyngeal constrictor coordinates to CASM allows for a robust analysis of the multiple components of swallowing mechanics interacting with a wide range of variables in both patient-specific and cohort studies derived from commonly used imaging data.

  18. A Novel Imaging Analysis Method for Capturing Pharyngeal Constriction During Swallowing

    PubMed Central

    Schwertner, Ryan W.; Garand, Kendrea L.; Pearson, William G.

    2016-01-01

    Videofluoroscopic imaging of swallowing, known as the Modified Barium Study (MBS), is the standard of care for assessing swallowing difficulty. While the clinical purpose of this radiographic imaging is primarily to assess aspiration risk, valuable biomechanical data is embedded in these studies. Computational analysis of swallowing mechanics (CASM) is an established research methodology for assessing multiple interactions of swallowing mechanics based on coordinates mapping muscle function, including hyolaryngeal movement, pharyngeal shortening, tongue base retraction, and extension of the head and neck; however, coordinates characterizing pharyngeal constriction are undeveloped. The aim of this study was to establish a method for locating the superior and middle pharyngeal constrictors using hard landmarks as guides on MBS videofluoroscopic imaging, and to test the reliability of this new method. Twenty de-identified, normal MBS videos were randomly selected from a database. Two raters annotated landmarks for the superior and middle pharyngeal constrictors frame-by-frame using a semi-automated MATLAB tracker tool at two time points. Intraclass correlation coefficients were used to assess test-retest reliability between the two raters, with an ICC of 0.99 or greater for all coordinates at the retest measurement. MorphoJ integrated software was used to perform a discriminant function analysis to visualize how all 12 coordinates interact with each other in normal swallowing. The addition of the superior and middle pharyngeal constrictor coordinates to CASM allows for a robust analysis of the multiple components of swallowing mechanics interacting with a wide range of variables in both patient-specific and cohort studies derived from commonly used imaging data. PMID:28239682

  19. RHSEG and Subdue: Background and Preliminary Approach for Combining these Technologies for Enhanced Image Data Analysis, Mining and Knowledge Discovery

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cook, Diane J.

    2008-01-01

    Under a project recently selected for funding by NASA's Science Mission Directorate under the Applied Information Systems Research (AISR) program, Tilton and Cook will design and implement the integration of the Subdue graph based knowledge discovery system, developed at the University of Texas Arlington and Washington State University, with image segmentation hierarchies produced by the RHSEG software, developed at NASA GSFC, and perform pilot demonstration studies of data analysis, mining and knowledge discovery on NASA data. Subdue represents a method for discovering substructures in structural databases. Subdue is devised for general-purpose automated discovery, concept learning, and hierarchical clustering, with or without domain knowledge. Subdue was developed by Cook and her colleague, Lawrence B. Holder. For Subdue to be effective in finding patterns in imagery data, the data must be abstracted up from the pixel domain. An appropriate abstraction of imagery data is a segmentation hierarchy: a set of several segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. The RHSEG program, a recursive approximation to a Hierarchical Segmentation approach (HSEG), can produce segmentation hierarchies quickly and effectively for a wide variety of images. RHSEG and HSEG were developed at NASA GSFC by Tilton. In this presentation we provide background on the RHSEG and Subdue technologies and present a preliminary analysis on how RHSEG and Subdue may be combined to enhance image data analysis, mining and knowledge discovery.

  20. MultiSpec—a tool for multispectral hyperspectral image data analysis

    NASA Astrophysics Data System (ADS)

    Biehl, Larry; Landgrebe, David

    2002-12-01

    MultiSpec is a multispectral image data analysis software application. It is intended to provide a fast, easy-to-use means for analysis of multispectral image data, such as that from the Landsat, SPOT, MODIS or IKONOS series of Earth observational satellites, hyperspectral data such as that from the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) and EO-1 Hyperion satellite system or the data that will be produced by the next generation of Earth observational sensors. The primary purpose for the system was to make new, otherwise complex analysis tools available to the general Earth science community. It has also found use in displaying and analyzing many other types of non-space related digital imagery, such as medical image data and in K-12 and university level educational activities. MultiSpec has been implemented for both the Apple Macintosh® and Microsoft Windows® operating systems (OS). The effort was first begun on the Macintosh OS in 1988. The GLOBE (http://www.globe.gov) program supported the development of a subset of MultiSpec for the Windows OS in 1995. Since then most (but not all) of the features in the Macintosh OS version have been ported to the Windows OS version. Although copyrighted, MultiSpec with its documentation is distributed without charge. The Macintosh and Windows versions and documentation on its use are available from the World Wide Web at URL: http://dynamo.ecn.purdue.edu/~biehl/MultiSpec/. MultiSpec is copyrighted (1991-2001) by Purdue Research Foundation, West Lafayette, Indiana 47907.

  1. Modified dixon‐based renal dynamic contrast‐enhanced MRI facilitates automated registration and perfusion analysis

    PubMed Central

    Leiner, Tim; Vink, Eva E.; Blankestijn, Peter J.; van den Berg, Cornelis A.T.

    2017-01-01

    Purpose Renal dynamic contrast‐enhanced (DCE) MRI provides information on renal perfusion and filtration. However, clinical implementation is hampered by challenges in postprocessing as a result of misalignment of the kidneys due to respiration. We propose to perform automated image registration using the fat‐only images derived from a modified Dixon reconstruction of a dual‐echo acquisition because these provide consistent contrast over the dynamic series. Methods DCE data of 10 hypertensive patients was used. Dual‐echo images were acquired at 1.5 T with temporal resolution of 3.9 s during contrast agent injection. Dixon fat, water, and in‐phase and opposed‐phase (OP) images were reconstructed. Postprocessing was automated. Registration was performed both to fat images and OP images for comparison. Perfusion and filtration values were extracted from a two‐compartment model fit. Results Automatic registration to fat images performed better than automatic registration to OP images with visible contrast enhancement. Median vertical misalignment of the kidneys was 14 mm prior to registration, compared to 3 mm and 5 mm with registration to fat images and OP images, respectively (P = 0.03). Mean perfusion values and MR‐based glomerular filtration rates (GFR) were 233 ± 64 mL/100 mL/min and 60 ± 36 mL/minute, respectively, based on fat‐registered images. MR‐based GFR correlated with creatinine‐based GFR (P = 0.04) for fat‐registered images. For unregistered and OP‐registered images, this correlation was not significant. Conclusion Absence of contrast changes on Dixon fat images improves registration in renal DCE MRI and enables automated postprocessing, resulting in a more accurate estimation of GFR. Magn Reson Med 80:66–76, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29134673
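
    A minimal sketch of the registration idea, not the authors' pipeline: rigidly registering one dynamic frame to a reference frame using the fat-only images, here with SimpleITK and mutual information. The file names, transform type, and optimizer settings are assumptions chosen for illustration.

        import SimpleITK as sitk

        # Placeholder file names: a fat-only reference frame and a fat-only frame at time t.
        fixed = sitk.ReadImage("fat_reference.nii", sitk.sitkFloat32)
        moving = sitk.ReadImage("fat_frame_t.nii", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(32)                    # histogram bins (assumed)
        reg.SetOptimizerAsRegularStepGradientDescent(1.0, 1e-4, 200)  # step, tolerance, iterations
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY))

        transform = reg.Execute(fixed, moving)
        # The same transform would then be applied to the corresponding contrast-enhanced frame.
        aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)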

  2. ADC histogram analysis of muscle lymphoma - Correlation with histopathology in a rare entity.

    PubMed

    Meyer, Hans-Jonas; Pazaitis, Nikolaos; Surov, Alexey

    2018-06-21

    Diffusion weighted imaging (DWI) is able to reflect histopathology architecture. A novel imaging approach, namely histogram analysis, is used to further characterize lesions on MRI. The purpose of this study is to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with histopathology parameters in muscle lymphoma. Eight patients (mean age 64.8 years, range 45-72 years) with histopathologically confirmed muscle lymphoma were retrospectively identified. Cell count, total nucleic area, and average nucleic area were estimated using ImageJ. Additionally, the Ki67-index was calculated. DWI was obtained on a 1.5T scanner using b values of 0 and 1000 s/mm2. Histogram analysis was performed as a whole-lesion measurement using a custom-made Matlab-based application. The correlation analysis revealed statistically significant correlations between cell count and ADCmean (p=-0.76, P=0.03) as well as with ADCp75 (p=-0.79, P=0.02). Kurtosis and entropy correlated with average nucleic area (p=-0.81, P=0.02, p=0.88, P=0.007, respectively). None of the analyzed ADC parameters correlated with total nucleic area or with the Ki67-index. This study identified significant correlations between cellularity and histogram parameters derived from ADC maps in muscle lymphoma. Thus, histogram analysis parameters reflect histopathology in muscle tumors. Advances in knowledge: Whole-lesion ADC histogram analysis is able to reflect histopathology parameters in muscle lymphomas.
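
    The whole-lesion histogram parameters mentioned above (ADCmean, ADCp75, kurtosis, entropy) can be computed along the following lines; this Python sketch uses a synthetic ADC map and lesion mask rather than patient data, and the bin count is an arbitrary choice.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        adc_map = rng.normal(1.1e-3, 0.2e-3, (64, 64))     # synthetic ADC values (mm^2/s)
        mask = np.zeros((64, 64), dtype=bool)
        mask[20:40, 20:40] = True                          # synthetic whole-lesion ROI

        voxels = adc_map[mask]
        hist, _ = np.histogram(voxels, bins=64, density=True)
        p = hist / hist.sum()                              # normalised bin probabilities
        metrics = {
            "ADCmean": voxels.mean(),
            "ADCp75": np.percentile(voxels, 75),
            "kurtosis": stats.kurtosis(voxels),
            "entropy": float(-np.sum(p[p > 0] * np.log2(p[p > 0]))),
        }
        print(metrics)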

  3. Army technology development. IBIS query. Software to support the Image Based Information System (IBIS) expansion for mapping, charting and geodesy

    NASA Technical Reports Server (NTRS)

    Friedman, S. Z.; Walker, R. E.; Aitken, R. B.

    1986-01-01

    The Image Based Information System (IBIS) has been under development at the Jet Propulsion Laboratory (JPL) since 1975. It is a collection of more than 90 programs that enable processing of image, graphical, tabular data for spatial analysis. IBIS can be utilized to create comprehensive geographic data bases. From these data, an analyst can study various attributes describing characteristics of a given study area. Even complex combinations of disparate data types can be synthesized to obtain a new perspective on spatial phenomena. In 1984, new query software was developed enabling direct Boolean queries of IBIS data bases through the submission of easily understood expressions. An improved syntax methodology, a data dictionary, and display software simplified the analysts' tasks associated with building, executing, and subsequently displaying the results of a query. The primary purpose of this report is to describe the features and capabilities of the new query software. A secondary purpose of this report is to compare this new query software to the query software developed previously (Friedman, 1982). With respect to this topic, the relative merits and drawbacks of both approaches are covered.

  4. Automatic detection of zebra crossings from mobile LiDAR data

    NASA Astrophysics Data System (ADS)

    Riveiro, B.; González-Jorge, H.; Martínez-Sánchez, J.; Díaz-Vilariño, L.; Arias, P.

    2015-07-01

    An algorithm for the automatic detection of zebra crossings from mobile LiDAR data is developed and tested for application to road management purposes. The algorithm consists of several subsequent processes, starting with road segmentation by performing a curvature analysis for each laser cycle. Then, intensity images are created from the point cloud using rasterization techniques, in order to detect zebra crossings using the Standard Hough Transform and logical constraints. To optimize the results, image processing algorithms are applied to the intensity images from the point cloud. These algorithms include binarization to separate the painted area from the rest of the pavement, median filtering to avoid noisy points, and mathematical morphology to fill the gaps between the pixels at the border of white marks. Once the road marking is detected, its position is calculated. This information is valuable for the inventory purposes of road managers that use Geographic Information Systems. The performance of the algorithm has been evaluated over several mobile LiDAR strips accounting for a total of 30 zebra crossings. That test showed a completeness of 83%. Non-detected marks mainly result from paint deterioration of the zebra crossing or from occlusions in the point cloud produced by other vehicles on the road.
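
    A rough Python/OpenCV sketch of the image-processing stage described above, run on a synthetic intensity raster: Otsu binarization, median filtering, morphological closing, and Hough line detection. The probabilistic Hough variant and all parameter values are stand-ins for illustration, not the authors' implementation.

        import cv2
        import numpy as np

        intensity = np.zeros((200, 200), dtype=np.uint8)   # synthetic rasterised intensity image
        for y in range(60, 140, 16):
            intensity[y:y + 8, 40:160] = 255               # zebra-like painted stripes

        _, binary = cv2.threshold(intensity, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binary = cv2.medianBlur(binary, 3)                 # remove noisy points
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
        binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)   # fill gaps at stripe borders

        lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=50,
                                minLineLength=60, maxLineGap=10)
        print(0 if lines is None else len(lines), "candidate stripe segments")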

  5. Colour image segmentation using unsupervised clustering technique for acute leukemia images

    NASA Astrophysics Data System (ADS)

    Halim, N. H. Abd; Mashor, M. Y.; Nasir, A. S. Abdul; Mustafa, N.; Hassan, R.

    2015-05-01

    Colour image segmentation has become more popular in computer vision due to its importance in most medical analysis tasks. This paper proposes a comparison between different colour components of the RGB (red, green, blue) and HSI (hue, saturation, intensity) colour models that will be used in order to segment acute leukemia images. First, partial contrast stretching is applied on the leukemia images to increase the visual aspect of the blast cells. Then, an unsupervised moving k-means clustering algorithm is applied on the various colour components of the RGB and HSI colour models for the purpose of segmenting the blast cells from the red blood cells and background regions in leukemia images. Different colour components of the RGB and HSI colour models have been analyzed in order to identify the colour component that gives good segmentation performance. The segmented images are then processed using a median filter and a region growing technique to reduce noise and smooth the images. The results show that segmentation using the saturation component of the HSI colour model has proven to be the best in segmenting the nuclei of the blast cells in acute leukemia images as compared to the other colour components of the RGB and HSI colour models.
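
    A simplified Python sketch of the clustering step: plain k-means (rather than the paper's moving k-means) applied to the saturation channel of a synthetic smear image, followed by median filtering of the nucleus mask. HSV saturation is used here as a stand-in for the HSI saturation component, and the image, colours and parameters are illustrative assumptions.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic smear: pale background, one saturated "blast" nucleus, one red cell.
        bgr = np.full((120, 120, 3), (200, 200, 230), dtype=np.uint8)
        cv2.circle(bgr, (60, 60), 22, (160, 40, 130), -1)    # highly saturated nucleus
        cv2.circle(bgr, (30, 90), 12, (140, 150, 230), -1)   # moderately saturated red cell

        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        saturation = hsv[:, :, 1].reshape(-1, 1).astype(np.float32)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(saturation)
        segmented = km.labels_.reshape(hsv.shape[:2])
        nucleus_label = int(km.cluster_centers_.argmax())    # nuclei stain most saturated
        nucleus_mask = (segmented == nucleus_label).astype(np.uint8) * 255
        nucleus_mask = cv2.medianBlur(nucleus_mask, 5)       # smooth the mask, as in the paper
        print("nucleus pixels:", int(nucleus_mask.sum()) // 255)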

  6. ROC analysis for diagnostic accuracy of fracture by using different monitors.

    PubMed

    Liang, Zhigang; Li, Kuncheng; Yang, Xiaolin; Du, Xiangying; Liu, Jiabin; Zhao, Xin; Qi, Xiangdong

    2006-09-01

    The purpose of this study was to compare diagnostic accuracy by using two types of monitors. Four radiologists with 10 years' experience twice interpreted the films of 77 fracture cases by using the ViewSonic P75f+ and BARCO MGD221 monitors, with a time interval of 3 weeks. Each time the radiologists used one type of monitor to interpret the images. The image browser used was the Unisight software provided by Atlastiger Company (Shanghai, China), and the interpretation results were analyzed via the LABMRMC software. In receiver operating characteristic studies scoring the presence or absence of fracture, the results for images interpreted on the monochromatic monitor showed a statistically significant difference compared with those interpreted on the color monitor. A significant difference was observed in the results obtained by using the two kinds of monitors. Color monitors cannot serve as substitutes for monochromatic monitors in the process of interpreting computed radiography (CR) images with fractures.

  7. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    PubMed

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
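
    The effect of the interpolator order can be probed along these lines: a hedged Python sketch that rotates a synthetic binary volume forward and back with nearest-neighbour, tri-linear, and cubic B-spline interpolation and reports the change in a simple bone volume fraction. The rotation angle, threshold, and volume are illustrative choices, not the study's protocol.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(3)
        volume = (ndimage.gaussian_filter(rng.random((64, 64, 64)), 3) > 0.5).astype(float)
        bv_tv_ref = volume.mean()                            # reference bone volume fraction

        for order, name in [(0, "nearest"), (1, "tri-linear"), (3, "B-spline")]:
            rotated = ndimage.rotate(volume, 7.0, axes=(0, 1), reshape=False, order=order)
            back = ndimage.rotate(rotated, -7.0, axes=(0, 1), reshape=False, order=order)
            bv_tv = (back > 0.5).mean()                      # re-binarise after interpolation
            print(f"{name:10s} relative error {abs(bv_tv - bv_tv_ref) / bv_tv_ref:.2%}")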

  8. Protection performance evaluation regarding imaging sensors hardened against laser dazzling

    NASA Astrophysics Data System (ADS)

    Ritt, Gunnar; Koerber, Michael; Forster, Daniel; Eberle, Bernd

    2015-05-01

    Electro-optical imaging sensors are widely distributed and used for many different purposes, including civil security and military operations. However, laser irradiation can easily disturb their operational capability. Thus, an adequate protection mechanism for electro-optical sensors against dazzling and damaging is highly desirable. Different protection technologies exist now, but none of them satisfies the operational requirements without any constraints. In order to evaluate the performance of various laser protection measures, we present two different approaches based on triangle orientation discrimination on the one hand and structural similarity on the other hand. For both approaches, image analysis algorithms are applied to images taken of a standard test scene with triangular test patterns which is superimposed by dazzling laser light of various irradiance levels. The evaluation methods are applied to three different sensors: a standard complementary metal oxide semiconductor camera, a high dynamic range camera with a nonlinear response curve, and a sensor hardened against laser dazzling.
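
    For the structural-similarity branch of the evaluation, a minimal Python sketch comparing an undazzled reference frame with a synthetically dazzled frame via SSIM might look as follows; the Gaussian dazzle spot and its amplitude are assumptions for illustration only.

        import numpy as np
        from skimage.metrics import structural_similarity

        rng = np.random.default_rng(4)
        reference = rng.random((128, 128))                   # stand-in for an undazzled frame
        spot = np.exp(-(((np.indices((128, 128)) - 64) ** 2).sum(axis=0)) / (2 * 20.0 ** 2))
        dazzled = np.clip(reference + 2.0 * spot, 0, 1)      # saturating laser spot added

        score = structural_similarity(reference, dazzled, data_range=1.0)
        print(f"SSIM under dazzle: {score:.3f}")             # 1.0 would mean no degradation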

  9. Segmentation of the pectoral muscle in breast MR images using structure tensor and deformable model

    NASA Astrophysics Data System (ADS)

    Lee, Myungeun; Kim, Jong Hyo

    2012-02-01

    Recently, breast MR images have been used in a wider range of clinical areas, including diagnosis, treatment planning, and treatment response evaluation, which requires quantitative analysis and breast tissue segmentation. Although several methods have been proposed for segmenting MR images, segmenting out breast tissues robustly from surrounding structures across a wide range of anatomical diversity still remains challenging. Therefore, in this paper, we propose a practical and general-purpose approach for segmenting the pectoral muscle boundary based on the structure tensor and a deformable model. The segmentation workflow comprises four key steps: preprocessing, detection of the region of interest (ROI) within the breast region, segmentation of the pectoral muscle, and finally extraction and refinement of the pectoral muscle boundary. Experimental results show that the proposed method can segment the pectoral muscle robustly in diverse patient cases. In addition, the proposed method will allow quantification research to be applied to various breast images.
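
    A small Python sketch of the structure-tensor ingredient: computing the smoothed tensor elements and the eigenvalue-based coherence that highlights an oriented boundary, here on a synthetic slice rather than a breast MR image. The smoothing sigma is an arbitrary choice, and the deformable-model stage is not shown.

        import numpy as np
        from skimage.feature import structure_tensor

        rng = np.random.default_rng(5)
        slice2d = rng.normal(0.0, 0.05, (128, 128))            # noisy synthetic slice
        rows = np.arange(128)
        slice2d[rows, np.clip(rows // 2 + 30, 0, 127)] += 1.0   # an oblique bright boundary

        Axx, Axy, Ayy = structure_tensor(slice2d, sigma=2.0)    # smoothed tensor elements
        tmp = np.sqrt((Axx - Ayy) ** 2 + 4.0 * Axy ** 2)
        l1, l2 = (Axx + Ayy + tmp) / 2.0, (Axx + Ayy - tmp) / 2.0   # per-pixel eigenvalues
        coherence = ((l1 - l2) / (l1 + l2 + 1e-12)) ** 2        # near 1 along oriented edges
        print("maximum coherence:", round(float(coherence.max()), 3))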

  10. Preclinical Imaging for the Study of Mouse Models of Thyroid Cancer

    PubMed Central

    Greco, Adelaide; Orlandella, Francesca Maria; Iervolino, Paola Lucia Chiara; Klain, Michele; Salvatore, Giuliana

    2017-01-01

    Thyroid cancer, which represents the most common tumor among endocrine malignancies, comprises a wide range of neoplasms with different clinical aggressiveness. One of the most important challenges in research is to identify mouse models that most closely resemble human pathology; other goals include finding a way to detect markers of disease that are common to humans and mice and identifying the most appropriate and least invasive therapeutic strategies for specific tumor types. Preclinical thyroid imaging includes a wide range of techniques that allow for morphological and functional characterization of thyroid disease as well as targeting and, in most cases, quantitative analysis of the molecular pattern of the thyroid cancer. The aim of this review paper is to provide an overview of all of the imaging techniques used to date for both diagnostic and theranostic purposes in mouse models of thyroid cancer. PMID:29258188

  11. Feasibility of generating quantitative composition images in dual energy mammography: a simulation study

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Kim, Ye-seul; Choi, Sunghoon; Lee, Haenghwa; Choi, Seungyeon; Kim, Hee-Joung

    2016-03-01

    Breast cancer is one of the most common malignancies in women. For years, mammography has been used as the gold standard for localizing breast cancer, despite its limitation in determining cancer composition. Therefore, the purpose of this simulation study is to confirm the feasibility of obtaining tumor composition using dual energy digital mammography. To simulate X-ray sources for dual energy mammography, tube voltages of 26 kVp and 39 kVp were used for the low- and high-energy beams, respectively. Additionally, energy subtraction and inverse mapping functions were applied to provide compositional images. The resultant images showed that the breast composition obtained by the inverse mapping function with cubic fitting achieved the highest accuracy and least noise. Furthermore, breast density analysis with cubic fitting showed less than 10% error compared to the true values. In conclusion, this study demonstrated the feasibility of creating individual compositional images and the capability of analyzing breast density effectively.

  12. Representations in plastic surgery: the impact of self-image and self-confidence in the work environment.

    PubMed

    Foustanos, A; Pantazi, L; Zavrides, H

    2007-01-01

    This research was initiated by the authors' conviction that many people currently pay great attention to their personal appearance, which is directly linked to their self-confidence. The external image of individuals appears to have a decisive influence on their behavior and personal choices regarding both their personal and professional lives. Accordingly, it can be assumed that appearance influences professional choices and development. Moreover, individuals associate increased self-confidence with positive social images. Therefore, the main variables used in this study were self-image, self-confidence, and work environment. For the purpose of this study, the authors developed a questionnaire and distributed it to a sample of 100 women who had undergone aesthetic plastic surgery. The aim of the questionnaire was to discover the opinion of these women concerning the aforementioned assumptions. After the data processing and analysis, the authors concluded that the aforementioned variables are statistically significant and correlated.

  13. Automatic analysis and quantification of fluorescently labeled synapses in microscope images

    NASA Astrophysics Data System (ADS)

    Yona, Shai; Katsman, Alex; Orenbuch, Ayelet; Gitler, Daniel; Yitzhaky, Yitzhak

    2011-09-01

    The purpose of this work is to classify and quantify synapses and their properties in the cultures of a mouse's hippocampus, from images acquired by a fluorescent microscope. Quantification features include the number of synapses, their intensity and their size characteristics. The images obtained by the microscope contain hundreds to several thousands of synapses with various elliptic-like shape features and intensities. These images also include other features such as glia cells and other biological objects beyond the focus plane; those features reduce the visibility of the synapses and interrupt the segmentation process. The proposed method comprises several steps, including background subtraction, identification of suspected centers of synapses as local maxima of small neighborhoods, evaluation of the tendency of objects to be synapses according to intensity properties at their larger neighborhoods, classification of detected synapses into categories as bulks or single synapses and finally, delimiting the borders of each synapse.
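
    The detection steps outlined above can be sketched in Python roughly as follows: top-hat background subtraction followed by local-maxima detection on a synthetic image with bright puncta. The footprint size, peak spacing, and intensity threshold are illustrative assumptions, not the authors' parameters.

        import numpy as np
        from scipy import ndimage
        from skimage.morphology import white_tophat, disk
        from skimage.feature import peak_local_max

        rng = np.random.default_rng(6)
        image = ndimage.gaussian_filter(rng.random((256, 256)), 8) * 0.3   # slowly varying background
        for r, c in rng.integers(10, 246, size=(40, 2)):
            image[r - 2:r + 3, c - 2:c + 3] += 1.0            # 40 synthetic bright puncta
        image = ndimage.gaussian_filter(image, 1)             # slight blur, as from the optics

        foreground = white_tophat(image, disk(5))             # background subtraction
        peaks = peak_local_max(foreground, min_distance=4, threshold_abs=0.2)
        print(f"{len(peaks)} candidate synapses detected")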

  14. Design of light guide sleeve on hyperspectral imaging system for skin diagnosis

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Chang, Chao-Hsin; Huang, Ting-Wei; Chiang, Hou-Chi; Wu, Jeng-Fu; Ou-Yang, Mang

    2017-08-01

    A hyperspectral imaging system is proposed for an early study of skin diagnosis. Stable, high hyperspectral image quality is important for analysis. Therefore, a light guide sleeve (LGS) was designed to be embedded in the hyperspectral imaging system. It provides a uniform light source on the object plane at the determined distance. Furthermore, it shields ambient light from entering the system and increasing noise. For the purpose of producing a uniform light source, the LGS device was designed with a symmetrical double-layered structure. It has light-cut structures to adjust the distribution of rays between the two layers and a Lambertian surface at the front end to promote output uniformity. In the simulation of the design, the uniformity of illuminance was about 91.7%. In the measurement of the actual light guide sleeve, the uniformity of illuminance was about 92.5%.

  15. Ethical implications of digital images for teaching and learning purposes: an integrative review.

    PubMed

    Kornhaber, Rachel; Betihavas, Vasiliki; Baber, Rodney J

    2015-01-01

    Digital photography has simplified the process of capturing and utilizing medical images. The process of taking high-quality digital photographs has been recognized as efficient, timely, and cost-effective. In particular, the evolution of smartphones and comparable technologies has made them a vital component in the teaching and learning of health care professionals. However, ethical standards in relation to digital photography for teaching and learning have not always been of the highest standard. The inappropriate utilization of digital images within the health care setting has the capacity to compromise patient confidentiality and increase the risk of litigation. Therefore, the aim of this review was to investigate the literature concerning the ethical implications for health professionals utilizing digital photography for teaching and learning. A literature search was conducted utilizing five electronic databases, PubMed, Embase (Excerpta Medica Database), Cumulative Index to Nursing and Allied Health Literature, Educational Resources Information Center, and Scopus, limited to the English language. Studies that endeavored to evaluate the ethical implications of digital photography for teaching and learning purposes in the health care setting were included. The search strategy identified 514 papers, of which nine were retrieved for full review. Four papers were excluded based on the inclusion criteria, leaving five papers for final analysis. Three key themes were developed: knowledge deficit, consent and beyond, and standards driving scope of practice. The assimilation of evidence in this review suggests that there is value for health professionals utilizing digital photography for teaching purposes in health education. However, there is limited understanding of the processes of obtaining, storing, and using such media for teaching purposes. Disparity related to policy and guideline identification and development in clinical practice was also highlighted. Therefore, the implementation of policy to guide practice requires further research.

  16. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431

  17. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, J; Nishikawa, R; Reiser, I

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out-cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as estimates of image noise and sharpness, respectively. The DICE coefficient was computed using a radiologist’s drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions was compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification performance. The best segmentation result does not necessarily lead to the best classification result. This work was supported in part by NIH grant R21-EB015053. R. Nishikawa receives royalties from Hologic, Inc.
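
    The DICE coefficient used as the segmentation metric above is straightforward to compute for two binary masks; a minimal Python sketch with synthetic masks follows.

        import numpy as np

        def dice(a, b):
            """DICE = 2|A ∩ B| / (|A| + |B|) for two boolean masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

        auto = np.zeros((64, 64), dtype=bool)
        auto[20:40, 20:40] = True                              # algorithm segmentation
        manual = np.zeros((64, 64), dtype=bool)
        manual[22:42, 21:41] = True                            # reader's outline
        print(f"DICE = {dice(auto, manual):.3f}")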

  18. Image analysis and machine learning in digital pathology: Challenges and opportunities.

    PubMed

    Madabhushi, Anant; Lee, George

    2016-10-01

    With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned and represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education, there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However, the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer-assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high-resolution digitized whole slide images. Additionally, there has been recent substantial interest in combining and fusing radiologic imaging and proteomics- and genomics-based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again, there is a paucity of powerful tools for combining disease-specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and also review the emergence of deep learning schemes for both object detection and tissue classification. We also briefly review some of the state of the art in fusion of radiology and pathology images and also combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.

  20. The Relationship between Neurite Density Measured with Confocal Microscopy in a Cleared Mouse Brain and Metrics Obtained from Diffusion Tensor and Diffusion Kurtosis Imaging

    PubMed Central

    Irie, Ryusuke; Kamagata, Koji; Kerever, Aurelien; Ueda, Ryo; Yokosawa, Suguru; Otake, Yosuke; Ochi, Hisaaki; Yoshizawa, Hidekazu; Hayashi, Ayato; Tagawa, Kazuhiko; Okazawa, Hitoshi; Takahashi, Kohske; Sato, Kanako; Hori, Masaaki; Arikawa-Hirasawa, Eri; Aoki, Shigeki

    2018-01-01

    Purpose: Diffusional kurtosis imaging (DKI) enables sensitive measurement of tissue microstructure by quantifying the non-Gaussian diffusion of water. Although DKI is widely applied in many situations, histological correlation with DKI analysis is lacking. The purpose of this study was to determine the relationship between DKI metrics and neurite density measured using confocal microscopy of a cleared mouse brain. Methods: One thy-1 yellow fluorescent protein 16 mouse was deeply anesthetized and perfusion fixation was performed. The brain was carefully dissected out and whole-brain MRI was performed using a 7T animal MRI system. DKI and diffusion tensor imaging (DTI) data were obtained. After the MRI scan, brain sections were prepared and then cleared using aminoalcohols (CUBIC). Confocal microscopy was performed using a two-photon confocal microscope with a laser. Forty-eight ROIs were set on the caudate putamen, seven ROIs on the anterior commissure, and seven ROIs on the ventral hippocampal commissure on the confocal microscopic image and a corresponding MR image. In each ROI, histological neurite density and the metrics of DKI and DTI were calculated. The correlations between diffusion metrics and neurite density were analyzed using Pearson correlation coefficient analysis. Results: Mean kurtosis (MK) (P = 5.2 × 10−9, r = 0.73) and radial kurtosis (P = 2.3 × 10−9, r = 0.74) strongly correlated with neurite density in the caudate putamen. The correlation between fractional anisotropy (FA) and neurite density was moderate (P = 0.0030, r = 0.42). In the anterior commissure and the ventral hippocampal commissure, neurite density and FA were very strongly correlated (P = 1.3 × 10−5, r = 0.90). MK values in these areas were very high and showed no significant correlation (P = 0.48). Conclusion: DKI accurately reflected neurite density in the area with crossing fibers, potentially allowing evaluation of complex microstructures. PMID:29213008

  1. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006

    PubMed Central

    Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.

    2016-01-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016 Online supplemental material is available for this article. PMID:27002418

  2. Analysis of LANDSAT-4 TM Data for Lithologic and Image Mapping Purpose

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Salisbury, J. W.; Bender, L. V.; Jones, O. D.; Mimms, D. L.

    1984-01-01

    Lithologic mapping techniques using the near infrared bands of the Thematic Mapper onboard the LANDSAT 4 satellite are investigated. These methods are coupled with digital masking to test the capability of mapping geologic materials. Data are examined under medium to low Sun angle illumination conditions to determine the detection limits of materials with absorption features. Several detection anomalies are observed and explained.

  3. What Images Show That Words Do Not: Analysis of Pre-Service Teachers' Depictions of Effective Agricultural Education Teachers in the 21st Century

    ERIC Educational Resources Information Center

    Robinson, J. Shane; Kelsey, Kathleen D.; Terry, Robert, Jr.

    2013-01-01

    One of the intended outcomes of agricultural teacher education programs is the progressive development and refinement of students' professional identity. The purpose of this study was to determine the extent to which pre-service agriculture teachers' mental models, depicting the roles and responsibilities of school-based agriculture teachers,…

  4. Improving depiction of temporal bone anatomy with low-radiation dose CT by an integrated circuit detector in pediatric patients: a preliminary study.

    PubMed

    He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing

    2014-12-01

    The purpose of this study was to determine the performance of low-dose computed tomography (CT) scanning with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, compared with a conventional detector. The study was performed with the approval of our institutional review board and the patients' anonymity was maintained. A total of 86 children <3 years of age underwent imaging of the temporal bone with low-dose CT (80 kV/150 mAs) equipped with either an IC detector or a conventional discrete circuit (DC) detector. The image noise was measured for quantitative analysis. Thirty-five structures of the temporal bone were further assessed and rated by 2 radiologists for qualitative analysis. κ statistics were used to determine the agreement reached between the 2 radiologists on each image. The Mann-Whitney U test was used to determine the difference in image quality between the 2 detector systems. Objective analysis showed that the image noise was significantly lower (P<0.001) with the IC detector than with the DC detector. The κ values for qualitative assessment of the 35 fine anatomical structures revealed high interobserver agreement. The delineation for 30 of the 35 landmarks (86%) with the IC detector was superior to that with the conventional DC detector (P<0.05), although there were no differences in the delineation of the remaining 5 structures (P>0.05). The low-dose CT images acquired with the IC detector provide better depiction of fine osseous structures of the temporal bone than those acquired with the conventional DC detector.

  5. Performance assessment of methods for estimation of fractal dimension from scanning electron microscope images.

    PubMed

    Risović, Dubravko; Pavlović, Zivko

    2013-01-01

    Processing of gray scale images in order to determine the corresponding fractal dimension is very important due to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for estimation of fractal dimension from gray scale images have been developed and routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results in a manner that makes interpretation difficult. Here, we report the results of a comparative assessment of the performance of several of the most frequently used algorithms/methods for estimation of fractal dimension. To that purpose, we have used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms/methods was evaluated using the statistical Z-score approach. The differences between the performances of the six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy on the same samples. The analysis of results shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions could be estimated using certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance in the whole investigated range of fractal dimensions. The difference statistic proved to be less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning, and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should be useful and provide guidelines to researchers using/attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy. © Wiley Periodicals, Inc.
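
    Of the estimators compared above, box counting is the easiest to sketch: a hedged Python example that estimates the dimension as the slope of log N(s) versus log(1/s) on a binarized image. The synthetic random mask and the box sizes are placeholders, not the study's SEM data or exact algorithm variants.

        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            """Estimate the fractal dimension as the slope of log N(s) versus log(1/s)."""
            counts = []
            for s in sizes:
                h, w = mask.shape
                trimmed = mask[:h - h % s, :w - w % s]                 # crop to a multiple of s
                blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
                counts.append(blocks.sum())                            # occupied boxes at scale s
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        rng = np.random.default_rng(7)
        mask = rng.random((256, 256)) > 0.7            # stand-in binarisation of a grayscale image
        print(f"estimated fractal dimension: {box_counting_dimension(mask):.2f}")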

  6. Parallel Wavefront Analysis for a 4D Interferometer

    NASA Technical Reports Server (NTRS)

    Rao, Shanti R.

    2011-01-01

    This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to acquire several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
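
    The capture/analysis split described above can be illustrated with a generic Python sketch that farms previously captured frames out to a local process pool; analyze_frame and the captures/ directory are hypothetical stand-ins, and the networked 4Sight instances of the actual system are replaced here by worker processes.

        from multiprocessing import Pool
        import glob
        import numpy as np

        def analyze_frame(path):
            # Hypothetical per-frame analysis: load a saved interferogram and
            # reduce it to a single placeholder statistic.
            frame = np.load(path)
            return path, float(frame.mean())

        if __name__ == "__main__":
            paths = sorted(glob.glob("captures/*.npy"))    # frames saved at capture time
            with Pool(processes=8) as pool:                # controller farms out frames
                results = pool.map(analyze_frame, paths)
            # Collate the processed frames into a single measurement file on disk
            np.save("measurement.npy", np.array([v for _, v in results]))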

  7. Spatial and temporal changes in household structure locations using high-resolution satellite imagery for population assessment: an analysis in southern Zambia, 2006-2011.

    PubMed

    Shields, Timothy; Pinchoff, Jessie; Lubinda, Jailos; Hamapumbu, Harry; Searle, Kelly; Kobayashi, Tamaki; Thuma, Philip E; Moss, William J; Curriero, Frank C

    2016-05-31

    Satellite imagery is increasingly available at high spatial resolution and can be used for various purposes in public health research and programme implementation. Comparing a census generated from two satellite images of the same region in rural southern Zambia obtained four and a half years apart identified patterns of household locations and change over time. The length of time that a satellite image-based census is accurate determines its utility. Households were enumerated manually from satellite images obtained in 2006 and 2011 of the same area. Spatial statistics were used to describe clustering, cluster detection, and spatial variation in the location of households. A total of 3821 household locations were enumerated in 2006 and 4256 in 2011, a net change of 435 houses (11.4% increase). Comparison of the images indicated that 971 (25.4%) structures were added and 536 (14.0%) removed. Further analysis suggested similar household clustering in the two images and no substantial difference in concentration of households across the study area. Cluster detection analysis identified a small area where significantly more household structures were removed than expected; however, the amount of change was of limited practical significance. These findings suggest that random sampling of households for study participation would not induce geographic bias if based on a 4.5-year-old image in this region. Application of spatial statistical methods provides insights into the population distribution changes between two time periods and can be helpful in assessing the accuracy of satellite imagery.

  8. Comparison between non-invasive methods used on paintings by Goya and his contemporaries: hyperspectral imaging vs. point-by-point spectroscopic analysis.

    PubMed

    Daniel, Floréal; Mounier, Aurélie; Pérez-Arantegui, Josefina; Pardos, Carlos; Prieto-Taboada, Nagore; Fdez-Ortiz de Vallejuelo, Silvia; Castro, Kepa

    2017-06-01

    The development of non-invasive techniques for the characterization of pigments is crucial in order to preserve the integrity of artworks. In this sense, the usefulness of hyperspectral imaging has been demonstrated: it allows pigment characterization of the whole painting. However, it sometimes also needs to be complemented by other point-by-point techniques. In the present article, the advantages of hyperspectral imaging over point-by-point spectroscopic analysis were evaluated. For that purpose, three paintings were analysed by hyperspectral imaging, handheld X-ray fluorescence and handheld Raman spectroscopy in order to determine the best non-invasive technique for pigment identification. Thanks to this work, the main pigments used in Aragonese artworks, and especially in Goya's paintings, were identified and mapped by imaging reflection spectroscopy. All the analysed pigments corresponded to those used at the time of Goya. Regarding the techniques used, the information obtained by the hyperspectral imaging and the point-by-point analysis has been, in general, different and complementary. Given this fact, selecting only one technique is not recommended, and the present work demonstrates the usefulness of the combination of all the techniques used as the best non-invasive methodology for pigment characterization. Moreover, the proposed methodology is a relatively quick procedure that allows a larger number of Goya's paintings in the museum to be surveyed, increasing the possibility of obtaining significant results and providing a chance for extensive comparisons, which are relevant from the point of view of art history.

  9. Thermostructural Analysis of the SOFIA Fine Field and Wide Field Imagers Subjected to Convective Thermal Shock

    NASA Technical Reports Server (NTRS)

    Kostyk, Christopher B.

    2012-01-01

    The Stratospheric Observatory For Infrared Astronomy (SOFIA) is a highly modified Boeing 747-SP with a 17-ton infrared telescope installed in the aft portion of the aircraft. Unlike ground- and space-based platforms, SOFIA can deploy to make observations anytime, anywhere in the world. The originally designed aircraft configuration included a ground pre-cool system; however, due to various factors in the history of the project, that system was not installed. This lack of ground pre-cooling was the source of concern about whether or not the imagers would be exposed to a potentially unsafe thermostructural environment. This concern was in addition to the already-existing concern of some project members that the air temperature rate of change during flight (both at constant altitude and during ascent or descent) could expose the imagers to an unsafe thermostructural environment. Four optical components were identified as the components of concern: two of higher concern (one in each imager), and two of lower concern (one in each imager). The analysis effort began by analyzing one component, after which the analyses for the other components were deemed unnecessary. The purpose of this report is to document these findings as well as lessons learned from the effort.

  10. A noise power spectrum study of a new model-based iterative reconstruction system: Veo 3.0.

    PubMed

    Li, Guang; Liu, Xinming; Dodge, Cristina T; Jensen, Corey T; Rong, X John

    2016-09-08

    The purpose of this study was to evaluate the performance of the third generation of the model-based iterative reconstruction (MBIR) system, Veo 3.0, based on noise power spectrum (NPS) analysis with various clinical presets over a wide range of clinically applicable dose levels. A CatPhan 600 surrounded by an oval, fat-equivalent ring to mimic patient size/shape was scanned 10 times at each of six dose levels on a GE HD 750 scanner. NPS analysis was performed on images reconstructed with various Veo 3.0 preset combinations, for comparison with images reconstructed using Veo 2.0, filtered back projection (FBP), and adaptive statistical iterative reconstruction (ASiR). The new Target Thickness setting resulted in higher noise in thicker axial images. The new Texture Enhancement function achieved more isotropic noise behavior with fewer image artifacts. Veo 3.0 provides additional reconstruction options designed to allow the user a choice of balance between spatial resolution and image noise, relative to Veo 2.0. Veo 3.0 provides more user-selectable options and, in general, improved isotropic noise behavior in comparison to Veo 2.0. The overall noise reduction performance of both versions of MBIR was improved in comparison to FBP and ASiR, especially at low dose levels. © 2016 The Authors.
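
    As background, a minimal sketch of a 2D NPS estimate from repeated scans of a uniform region is shown below (a standard textbook formulation, not the study's exact processing); roi_stack is assumed to be an (n, N, N) array of co-registered ROIs and px the pixel size in mm.

        import numpy as np

        def nps_2d(roi_stack, px):
            # Remove the ensemble mean so only noise remains in each realization
            rois = roi_stack - roi_stack.mean(axis=0)
            n, N, _ = rois.shape
            spectra = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(-2, -1))) ** 2
            # Normalize by (dx*dy)/(Nx*Ny) and average over the n realizations
            return spectra.mean(axis=0) * (px * px) / (N * N)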

  11. Scaling Analysis of Ocean Surface Turbulent Heterogeneities from Satellite Remote Sensing: Use of 2D Structure Functions.

    PubMed

    Renosh, P R; Schmitt, Francois G; Loisel, Hubert

    2015-01-01

    Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which are rarely studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, which is classically used in the framework of scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images which have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient modulus transform of the original image does not provide correct scaling exponents. We show, using a fractional Brownian simulation in 2D, that the structure function (SF) can be used with randomly sampled pairs of points, and verify that one million point pairs provide sufficient statistics.
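
    The pair-sampling idea can be sketched as follows for the second-order structure function; this is an illustrative implementation that skips missing (NaN) pixels, with bin settings chosen arbitrarily rather than taken from the paper.

        import numpy as np

        def structure_function_2d(field, n_pairs=1_000_000, n_bins=30, rng=None):
            # S2(r) = <|f(x+r) - f(x)|^2> estimated from random point pairs
            rng = rng or np.random.default_rng(0)
            ny, nx = field.shape
            p = rng.integers(0, [ny, nx], size=(n_pairs, 2))
            q = rng.integers(0, [ny, nx], size=(n_pairs, 2))
            a, b = field[p[:, 0], p[:, 1]], field[q[:, 0], q[:, 1]]
            ok = ~(np.isnan(a) | np.isnan(b))                  # discard gappy pixels
            r = np.hypot(*(p[ok] - q[ok]).T)                   # pair separation
            dz2 = (a[ok] - b[ok]) ** 2
            bins = np.logspace(0, np.log10(r.max() + 1), n_bins)
            idx = np.digitize(r, bins)
            sf = np.array([dz2[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(1, n_bins)])
            return 0.5 * (bins[:-1] + bins[1:]), sf            # lag centres, S2(r)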

  12. Automated identification of retained surgical items in radiological images

    NASA Astrophysics Data System (ADS)

    Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko

    2015-03-01

    Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle, or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone, due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.

  13. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution.

    PubMed

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained iteratively using a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction.
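
    A simplified sketch of the Hessian-eigenvalue step is given below, applied directly to the image rather than to curvelet sub-bands as in the paper; bright, elongated structures yield one strongly negative eigenvalue and one near zero, which the hypothetical vessel_response function turns into a rough vesselness map.

        import numpy as np
        from skimage.feature import hessian_matrix, hessian_matrix_eigvals

        def vessel_response(image, sigma=2.0):
            # Hessian at scale sigma; eigenvalues are returned in descending order
            H = hessian_matrix(image, sigma=sigma, order="rc")
            l1, l2 = hessian_matrix_eigvals(H)
            # Bright tubular structures: l2 strongly negative, l1 close to zero
            response = np.where(l2 < 0, np.abs(l2) - np.abs(l1), 0.0)
            return response / (response.max() + 1e-12)   # normalised vesselness-like map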

  14. Metal artifact reduction in CT, a phantom study: subjective and objective evaluation of four commercial metal artifact reduction algorithms when used on three different orthopedic metal implants.

    PubMed

    Bolstad, Kirsten; Flatabø, Silje; Aadnevik, Daniel; Dalehaug, Ingvild; Vetti, Nils

    2018-01-01

    Background: Metal implants may introduce severe artifacts in computed tomography (CT) images. Over the last few years, dedicated algorithms have been developed in order to reduce metal artifacts in CT images. Purpose: To investigate and compare metal artifact reduction algorithms (MARs) from four different CT vendors when imaging three different orthopedic metal implants. Material and Methods: Three clinical metal implants were attached to the leg of an anthropomorphic phantom: cobalt-chrome, stainless steel, and titanium. Four commercial MARs were investigated: SmartMAR (GE), O-MAR (Philips), iMAR (Siemens), and SEMAR (Toshiba). The images were evaluated subjectively by three observers and analyzed objectively by calculating the fraction of pixels with CT number above 500 HU in a region of interest around the metal. The average CT number and image noise were also measured. Results: Both subjective evaluation and objective analysis showed that MARs reduced metal artifacts and improved the image quality of CT images containing metal implants of steel and cobalt-chrome. When used on titanium, all MARs introduced new visible artifacts. Conclusion: The effect of MARs varied between CT vendors and the different metal implants used in orthopedic surgery. In both subjective evaluation and objective analysis, the effect of applying MARs was most obvious on steel and cobalt-chrome implants when using SEMAR from Toshiba, followed by SmartMAR from GE. However, MARs may also introduce new image artifacts, especially when used on titanium implants. Therefore, it is important to reconstruct all CT images containing metal both with and without MARs.
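
    The objective metrics described above are straightforward to compute; a minimal sketch follows, assuming a HU-calibrated image and a boolean ROI mask as inputs (the threshold and outputs mirror the description, not the authors' code).

        import numpy as np

        def artifact_metrics(hu_image, roi_mask, threshold=500):
            # Fraction of ROI pixels above the HU threshold, plus mean and noise (SD)
            roi = hu_image[roi_mask]
            return {
                "fraction_above_500HU": float(np.mean(roi > threshold)),
                "mean_HU": float(roi.mean()),
                "noise_SD_HU": float(roi.std(ddof=1)),
            }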

  15. Label-free imaging of trabecular meshwork cells using Coherent Anti-Stokes Raman Scattering (CARS) microscopy

    PubMed Central

    Lei, Tim C.; Ammar, David A.; Masihzadeh, Omid; Gibson, Emily A.

    2011-01-01

    Purpose: To image the human trabecular meshwork (TM) using a non-invasive, non-destructive technique without the application of an exogenous label. Methods: Flat-mounted TM samples from a human cadaver eye were imaged using two nonlinear optical techniques: coherent anti-Stokes Raman scattering (CARS) and two-photon autofluorescence (TPAF). In TPAF, two optical photons are simultaneously absorbed and excite molecules in the sample, which then emit a higher-energy photon. The signal is predominately from collagen and elastin. The CARS technique uses two laser frequencies to specifically excite carbon-hydrogen bonds, allowing the visualization of lipid-rich cell membranes. Multiple images were taken along an axis perpendicular to the surface of the TM for subsequent analysis. Results: Analysis of multiple TPAF images of the TM reveals the characteristic overlapping bundles of collagen of various sizes. Simultaneous CARS imaging revealed elliptical structures of ~7×10 µm in diameter populating the meshwork, consistent with TM cells. Irregularly shaped objects of ~4 µm diameter appeared in both the TPAF and CARS channels and are consistent with melanin granules. Conclusions: CARS techniques were successful in imaging live TM cells in freshly isolated human TM samples. Similar images have been obtained with standard histological techniques; however, the method described here has the advantage of being performed on unprocessed, unfixed tissue, free from the potential distortions of fine tissue morphology that can occur due to infusion of fixatives and treatment with alcohols. CARS imaging of the TM represents a new avenue for exploring details of aqueous outflow and TM cell physiology. PMID:22025898

  16. Analysis of Fundus Fluorescein Angiogram Based on the Hessian Matrix of Directional Curvelet Sub-bands and Distance Regularized Level Set Evolution

    PubMed Central

    Soltanipour, Asieh; Sadri, Saeed; Rabbani, Hossein; Akhlaghi, Mohammad Reza

    2015-01-01

    This paper presents a new procedure for automatic extraction of the blood vessels and optic disk (OD) in fundus fluorescein angiograms (FFA). In order to extract blood vessel centerlines, the vessel extraction algorithm starts with the analysis of directional images resulting from sub-bands of the fast discrete curvelet transform (FDCT) in similar directions and different scales. For this purpose, each directional image is processed using information from the first-order derivative and the eigenvalues obtained from the Hessian matrix. The final vessel segmentation is obtained iteratively using a simple region growing algorithm, which merges centerline images with the contents of images resulting from a modified top-hat transform followed by bit plane slicing. After extracting blood vessels from the FFA image, candidate regions for the OD are enhanced by removing blood vessels from the FFA image, using multi-structure-element morphology and modification of FDCT coefficients. Then, a Canny edge detector and Hough transform are applied to the reconstructed image to extract the boundary of candidate regions. In the next step, the information of the main arc of the retinal vessels surrounding the OD region is used to extract the actual location of the OD. Finally, the OD boundary is detected by applying distance regularized level set evolution. The proposed method was tested on FFA images from the angiography unit of Isfahan Feiz Hospital, comprising 70 FFA images from different diabetic retinopathy stages. The experimental results show an accuracy of more than 93% for vessel segmentation and more than 87% for OD boundary extraction. PMID:26284170

  17. An audit of imaging test utilization for the management of lymphoma in an oncology hospital: implications for resource planning?

    PubMed

    Schwartz, A; Gospodarowicz, M K; Khalili, K; Pintilie, M; Goddard, S; Keller, A; Tsang, R W

    2006-02-01

    The purpose of this study was to assist with resource planning by examining the pattern of physician utilization of imaging procedures for lymphoma patients in a dedicated oncology hospital. The proportion of imaging tests ordered for routine follow-up with no specific clinical indication was quantified, with specific attention to CT scans. A 3-month audit was performed. The reasons for ordering all imaging procedures (X-rays, CT scans, ultrasound, nuclear scans and MRI) were determined through a retrospective chart review. 411 lymphoma patients had 686 assessments (sets of imaging tests) and 981 procedures (individual imaging tests). Most procedures were CT scans (52%) and chest radiographs (30%). The most common reasons for ordering imaging were assessing response (23%) and investigating new symptoms (19%). Routine follow-up constituted 21% of the assessments (142/686); of these, 82% were chest radiographs (116/142), while 24% (34/142) were CT scans. With the analysis restricted to CT scans (296 assessments in 248 patients), the most common reasons for ordering CT scans were response evaluation (40%) and suspicion of recurrence and/or new symptoms (23%). Follow-up CT scans done with no clinical indication comprised 8% (25/296) of all CT assessments. Staging CT scans were under-represented at 6% of all assessments. Imaging with CT scans for follow-up of asymptomatic patients is infrequent. However, scans done for staging new lymphoma patients were unexpectedly low in frequency, due to scans done elsewhere prior to referral. This analysis uncovered utilization patterns, helped resource planning and provided data to reduce unnecessary imaging procedures.

  18. Multivariate image analysis of laser-induced photothermal imaging used for detection of caries tooth

    NASA Astrophysics Data System (ADS)

    El-Sherif, Ashraf F.; Abdel Aziz, Wessam M.; El-Sharkawy, Yasser H.

    2010-08-01

    Time-resolved photothermal imaging has been investigated to characterize teeth, for the purpose of discriminating between normal and carious areas of the hard tissue, using a thermal camera. Ultrasonic thermoelastic waves were generated in the hard tissue by the absorption of fiber-coupled Q-switched Nd:YAG laser pulses operating at 1064 nm, in conjunction with a laser-induced photothermal technique used to detect the thermal radiation waves for diagnosis of the human tooth. The concepts behind the use of photothermal techniques for off-line detection of carious tooth features were presented by our group in earlier work. This paper illustrates the application of multivariate image analysis (MIA) techniques to detect the presence of dental caries. MIA is used to rapidly detect the presence and quantity of common caries features as the teeth are scanned by high-resolution color (RGB) thermal cameras. Multivariate principal component analysis is used to decompose the acquired three-channel tooth images into a two-dimensional principal component (PC) space. Masking score point clusters in the score space and highlighting the corresponding pixels in the image space of the two dominant PCs enables isolation of caries defect pixels based on contrast and color information. The technique provides a qualitative result that can be used for early-stage caries detection. The proposed technique can potentially be used on-line, or in a time-resolved manner, to prescreen for the existence of caries through vision-based systems such as a real-time thermal camera. Experimental results on a large number of extracted teeth, as well as on a thermal image panorama of a human volunteer's teeth, are investigated and presented.
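
    The score-space masking step can be illustrated with a short scikit-learn sketch: the RGB frame is reshaped to an (N_pixels, 3) matrix, projected onto the two dominant principal components, and pixels falling in an illustrative box of the score space are flagged (the box limits here are placeholders, not the study's cluster boundaries).

        import numpy as np
        from sklearn.decomposition import PCA

        def score_space_mask(rgb_frame, t1=(1.0, np.inf), t2=(-np.inf, 0.0)):
            # Project all pixels onto the two dominant PCs of the 3-channel data
            h, w, _ = rgb_frame.shape
            X = rgb_frame.reshape(-1, 3).astype(float)
            scores = PCA(n_components=2).fit_transform(X)
            # Keep pixels whose (PC1, PC2) scores fall inside the chosen box
            mask = ((scores[:, 0] > t1[0]) & (scores[:, 0] < t1[1]) &
                    (scores[:, 1] > t2[0]) & (scores[:, 1] < t2[1]))
            return mask.reshape(h, w)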

  19. Molecular Imaging of Vasa Vasorum Neovascularization via DEspR-targeted Contrast-enhanced Ultrasound Micro-imaging in Transgenic Atherosclerosis Rat Model

    PubMed Central

    Decano, Julius L.; Moran, Anne Marie; Ruiz-Opazo, Nelson; Herrera, Victoria L. M.

    2011-01-01

    Purpose: Given that carotid vasa vasorum neovascularization is associated with increased risk for stroke and cardiac events, the present in vivo study was designed to investigate molecular imaging of carotid artery vasa vasorum neovascularization via target-specific contrast-enhanced ultrasound (CEU) micro-imaging. Procedures: Molecular imaging was performed in male transgenic rats with carotid artery disease and non-transgenic controls using dual endothelin1/VEGFsp receptor (DEspR)-targeted microbubbles (MBD) and the Vevo770 micro-imaging system and CEU imaging software. Results: DEspR-targeted CEU-positive imaging exhibited significantly higher contrast intensity signal (CIS) levels and pre-/post-destruction CIS differences in seven of 13 transgenic rats, in contrast to significantly lower CIS levels and differences in control isotype-targeted microbubble (MBC) CEU imaging (n=8) and in MBD CEU imaging of five non-transgenic control rats (P<0.0001). Ex vivo immunofluorescence analysis demonstrated binding of MBD to DEspR-positive endothelial cells, and association of DEspR-targeted increased contrast intensity signals with DEspR expression in vasa vasorum neovessels and intimal lesions. In vitro analysis demonstrated dose-dependent binding of MBD to DEspR-positive human endothelial cells, with increasing percentages of cells bound and numbers of MBD per cell, in contrast to MBC or non-labeled microbubbles (P<0.0001). Conclusion: In vivo DEspR-targeted molecular imaging detected increased DEspR expression in carotid artery lesions and in expanded vasa vasorum neovessels in transgenic rats with carotid artery disease. Future studies are needed to determine the predictive value for stroke or heart disease in this transgenic atherosclerosis rat model and translational applications. PMID:20972637

  20. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
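
    A hedged sketch of the heterogeneity step only is shown below: given a pigment-index map (the XYZ-to-index conversion itself is not reproduced here), a band-pass spatial filter isolates mid-frequency unevenness and the area fraction above a threshold serves as a heterogeneity index; the filter scales and threshold are illustrative, not the authors' settings.

        import numpy as np
        from scipy import ndimage

        def heterogeneity_index(pigment_map, sigma_small=2, sigma_large=20, thresh=0.05):
            # Difference-of-Gaussians isolates mid-frequency unevenness (spots, blotches)
            dog = (ndimage.gaussian_filter(pigment_map, sigma_small)
                   - ndimage.gaussian_filter(pigment_map, sigma_large))
            # Fraction of pixels whose local deviation exceeds the threshold
            return float(np.mean(np.abs(dog) > thresh))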

  1. Seamless stitching of tile scan microscope images.

    PubMed

    Legesse, F B; Chernavskaia, O; Heuke, S; Bocklitz, T; Meyer, T; Popp, J; Heintzmann, R

    2015-06-01

    For diagnostic purposes, optical imaging techniques need to obtain high-resolution images of extended biological specimens in reasonable time. The field of view of an objective lens, however, is often smaller than the sample size. To image the whole sample, laser scanning microscopes acquire tile scans that are stitched into larger mosaics. The appearance of such image mosaics is affected by visible edge artefacts that arise from various optical aberrations which manifest in grey level jumps across tile boundaries. In this contribution, a technique for stitching tiles into a seamless mosaic is presented. The stitching algorithm operates by equilibrating neighbouring edges and forcing the brightness at corners to a common value. The corrected image mosaics appear to be free from stitching artefacts and are, therefore, suited for further image analysis procedures. The contribution presents a novel method to seamlessly stitch tiles captured by a laser scanning microscope into a large mosaic. The motivation for the work is the failure of currently existing methods for stitching nonlinear, multimodal images captured by our microscopic setups. Our method eliminates the visible edge artefacts that appear between neighbouring tiles by taking into account the overall illumination differences among tiles in such mosaics. The algorithm first corrects the nonuniform brightness that exists within each of the tiles. It then compensates for grey level differences across tile boundaries by equilibrating neighbouring edges and forcing the brightness at the corners to a common value. After these artefacts have been removed further image analysis procedures can be applied on the microscopic images. Even though the solution presented here is tailored for the aforementioned specific case, it could be easily adapted to other contexts where image tiles are assembled into mosaics such as in astronomical or satellite photos. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
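
    The edge-equilibration idea can be sketched for the simplest case of two horizontally adjacent tiles: measure the grey-level jump along the shared boundary and blend it away with a linear ramp across each tile. This is a simplified illustration, not the corner-constrained solution of the paper.

        import numpy as np

        def equilibrate_pair(left, right):
            # Per-row grey-level jump across the shared vertical seam
            jump = left[:, -1].astype(float) - right[:, 0].astype(float)
            w = left.shape[1]
            ramp = np.linspace(0.0, 1.0, w)        # 0 at the far side, 1 at the seam
            # Spread half of the correction into each tile so the seam matches exactly
            left_c = left - 0.5 * jump[:, None] * ramp[None, :]
            right_c = right + 0.5 * jump[:, None] * ramp[None, ::-1]
            return left_c, right_c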

  2. Measurement of Postmortem Pupil Size: A New Method with Excellent Reliability and Its Application to Pupil Changes in the Early Postmortem Period.

    PubMed

    Fleischer, Luise; Sehner, Susanne; Gehl, Axel; Riemer, Martin; Raupach, Tobias; Anders, Sven

    2017-05-01

    Measurement of postmortem pupil width is a potential component of death time estimation. However, no standardized measurement method has been described. We analyzed a total of 71 digital images for pupil-iris ratio using the software ImageJ. Images were analyzed three times by four different examiners. In addition, serial images from 10 cases were taken between 2 and 50 h postmortem to detect spontaneous pupil changes. Intra- and inter-rater reliability of the method was excellent (ICC > 0.95). The method is observer independent and yields consistent results, and images can be digitally stored and re-evaluated. The method therefore seems highly suitable for forensic and scientific purposes. While statistical analysis of spontaneous pupil changes revealed a significant quartic polynomial dependence on postmortem time (p = 0.001), an obvious pattern was not detected. These results do not indicate suitability of spontaneous pupil changes for forensic death time estimation, as formerly suggested. © 2016 American Academy of Forensic Sciences.

  3. Face detection in color images using skin color, Laplacian of Gaussian, and Euler number

    NASA Astrophysics Data System (ADS)

    Saligrama Sundara Raman, Shylaja; Kannanedhi Narasimha Sastry, Balasubramanya Murthy; Subramanyam, Natarajan; Senkutuvan, Ramya; Srikanth, Radhika; John, Nikita; Rao, Prateek

    2010-02-01

    In this paper, a feature-based approach to face detection is proposed using an ensemble of algorithms. The method uses chrominance values and edge features to classify the image into skin and non-skin regions. The edge detector used for this purpose is the Laplacian of Gaussian (LoG), which is found to be appropriate for images containing multiple faces and noise. Eight-connectivity analysis of these regions segregates them as probable face or non-face. The procedure is made more robust by identifying local features within the skin regions, which include the number of holes, the percentage of skin, and the golden ratio. The proposed method has been tested on color face images of various races obtained from different sources, and its performance is found to be encouraging, as the color segmentation cleans up almost all of the complex facial features. The result obtained has a calculated accuracy of 86.5% on a test set of 230 images.
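
    A hedged sketch of the skin-colour stage is shown below: threshold the Cb/Cr chrominance channels to obtain a skin mask, label 8-connected regions, and keep candidates containing holes (eyes, mouth), which approximates an Euler-number test; the thresholds are common literature values, not necessarily those used by the authors.

        import numpy as np
        from skimage import color, measure

        def face_candidates(rgb):
            ycbcr = color.rgb2ycbcr(rgb)
            cb, cr = ycbcr[..., 1], ycbcr[..., 2]
            # Widely used Cb/Cr skin range (illustrative thresholds)
            skin = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
            labels = measure.label(skin, connectivity=2)        # 8-connectivity
            faces = []
            for region in measure.regionprops(labels):
                # Euler number < 1 means the region contains at least one hole
                if region.area > 500 and region.euler_number < 1:
                    faces.append(region.bbox)
            return faces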

  4. Application of ultrasound processed images in space: Quantitative assessment of diffuse affectations

    NASA Astrophysics Data System (ADS)

    Pérez-Poch, A.; Bru, C.; Nicolau, C.

    The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous effects on health. However, due to the need for highly trained radiologists to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathy and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed liver images. This computer system to analyze diffuse affectations may be used in situ or via telemedicine to the ground.

  5. Application of ultrasound processed images in space: assessing diffuse affectations

    NASA Astrophysics Data System (ADS)

    Pérez-Poch, A.; Bru, C.; Nicolau, C.

    The purpose of this study was to evaluate diffuse affectations in the liver using texture image processing techniques. Ultrasound diagnostic equipment is the modality of choice for use in space environments, as it is free from hazardous effects on health. However, due to the need for highly trained radiologists to assess the images, this imaging method is mainly applied to focal lesions rather than non-focal ones. We conducted a clinical study on 72 patients with different degrees of chronic hepatopathy and a control group of 18 individuals. All subjects' clinical reports and biopsy results were compared to the degree of affectation calculated by our computer system, thus validating the method. Full statistical results are given in the present paper, showing a good correlation (r=0.61) between the pathologist's report and the analysis of the heterogeneity of the processed liver images. This computer system to analyze diffuse affectations may be used in situ or via telemedicine to the ground.

  6. SU-G-JeP4-05: Effects of Irregular Respiratory Motion On the Positioning Accuracy of Moving Target with Free Breathing Cone-Beam Computerized Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X; Xiong, W; Gewanter, R

    Purpose: Average or maximum intensity projection (AIP or MIP) images derived from 4DCT images are often used as a reference image for target alignment when free-breathing cone-beam CT (FBCBCT) is used for positioning a moving target at treatment. This method can be highly accurate if the patient has stable respiratory motion. However, a patient's breathing pattern often varies irregularly. The purpose of this study is to investigate the effect of irregular respiration on the positioning accuracy of a moving target with FBCBCT. Methods: Eight patients' respiratory motion curves were selected to drive a Quasar phantom with embedded cubic and spherical targets. A 4DCT of the moving phantom was acquired on a CT scanner (Philips Brilliance 16) equipped with a Varian RPM system. The phase-binned 4DCT images and the corresponding MIP and AIP images were transferred into Eclipse for analysis. CBCTs of the phantom driven by the same breathing curves were acquired on a Varian TrueBeam and fused such that the zero positions of the moving targets are the same on both CBCT and AIP images. The sphere and cube volumes and centroid differences (alignment error) determined by MIP, AIP and FBCBCT images were compared. Results: Compared to the volumes determined by FBCBCT, the volumes of the cube and sphere in MIP images were 22.4%±8.8% and 34.2%±6.2% larger, while the volumes in AIP images were 7.1%±6.2% and 2.7%±15.3% larger, respectively. The alignment errors for the cube and sphere with center-center matches between MIP and FBCBCT were 3.5±3.1 mm and 3.2±2.3 mm, and the alignment errors between AIP and FBCBCT were 2.1±2.6 mm and 2.1±1.7 mm, respectively. Conclusion: AIP images appear to be better reference images than MIP images. However, irregular respiratory motion could compromise the positioning accuracy of a moving target if a target center-center match is used to align FBCBCT and AIP images.

  7. The Practical Application of Promoting Positive Body Image on a College Campus: Insights from Freshmen Women

    ERIC Educational Resources Information Center

    Smith-Jackson, TeriSue; Reel, Justine J.; Thackeray, Rosemary

    2014-01-01

    Background: Body image disturbances and disordered eating behaviors are prevalent across college campuses and can lead to psychological and physical health consequences. Purpose: The purpose of this study was to gain formative research on the promotion of positive body image on a university campus with the goal of developing educational programs.…

  8. Accuracy of DSM based on digital aerial image matching. (Polish Title: Dokładność NMPT tworzonego metodą automatycznego dopasowania cyfrowych zdjęć lotniczych)

    NASA Astrophysics Data System (ADS)

    Kubalska, J. L.; Preuss, R.

    2013-12-01

    Digital Surface Models (DSM) are increasingly used in GIS databases as a standalone product. They are also necessary for creating other products such as 3D city models, true-orthophotos and object-oriented classification. This article presents the results of DSM generation for classification of vegetation in urban areas. The source data allowed a DSM to be produced using both an image matching method and ALS data. The creation of the DSM from digital images, obtained with an Ultra Cam-D digital Vexcel camera, was carried out in Match-T by INPHO. This program optimizes the configuration of the image matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analyzed by comparing the DSM generated in Match-T with the DSM generated from ALS data. Because of the intended further use of the generated DSM, it was decided to create the model in a GRID structure with a cell size of 1 m. With this parameter, a differential model from both DSMs was also built, which allowed the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with the multi-image matching method is competitive with surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional registration of ALS data seems to be unnecessary.

  9. Image-Based Reconstruction and Analysis of Dynamic Scenes in a Landslide Simulation Facility

    NASA Astrophysics Data System (ADS)

    Scaioni, M.; Crippa, J.; Longoni, L.; Papini, M.; Zanzi, L.

    2017-12-01

    The application of image processing and photogrammetric techniques to the dynamic reconstruction of landslide simulations in a scaled-down facility is described. The simulations are also used here for active-learning purposes: students are helped to understand how physical processes happen and which kinds of observations may be obtained from a sensor network. In particular, the use of digital images to obtain multi-temporal information is presented. On one side, using a multi-view sensor set-up based on four synchronized GoPro 4 Black® cameras, a 4D (3D spatial position and time) reconstruction of the dynamic scene is obtained through the composition of several 3D models obtained from dense image matching. The final textured 4D model allows one to revisit a completed experiment at any time in a dynamic and interactive mode. On the other side, a digital image correlation (DIC) technique has been used to track surface point displacements in the image sequence obtained from the camera in front of the simulation facility. While the 4D model provides a qualitative description and documentation of the running experiment, the DIC analysis outputs quantitative information, such as local point displacements and velocities, to be related to physical processes and to other observations. All the hardware and software equipment adopted for the photogrammetric reconstruction is based on low-cost and open-source solutions.

  10. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

    [Figure: fiber-optic light source, microscope, and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeds data to an image acquisition board in a PC, and a Cartesian robot is controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. The system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.

  11. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
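
    In the same spirit, a short sketch using statsmodels fits a seasonal ARMA model to one scan line of a cloud-field image and returns the handful of fitted coefficients as a compact texture descriptor; the model orders and seasonal period are illustrative assumptions, not the parameters used in the paper.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        def row_texture_params(image_row, season=16):
            # Remove the mean and fit a seasonal ARMA (d = D = 0) to the scan line
            y = image_row.astype(float) - image_row.mean()
            model = ARIMA(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, season))
            res = model.fit()
            return res.params        # small parameter set describing the row's texture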

  12. A general-purpose computer program for studying ultrasonic beam patterns generated with acoustic lenses

    NASA Technical Reports Server (NTRS)

    Roberti, Dino; Ludwig, Reinhold; Looft, Fred J.

    1988-01-01

    A 3-D computer model of a piston radiator with lenses for focusing and defocusing is presented. To achieve high-resolution imaging, the frequency of the transmitted and received ultrasound must be as high as 10 MHz. Current ultrasonic transducers produce an extremely narrow beam at these high frequencies and thus are not appropriate for imaging schemes such as synthetic-aperture focus techniques (SAFT). Consequently, a numerical analysis program has been developed to determine field intensity patterns that are radiated from ultrasonic transducers with lenses. Lens shapes are described and the field intensities are numerically predicted and compared with experimental results.

  13. Fractal analysis of phasic laser images of the myocardium for the purpose of diagnostics of acute coronary insufficiency

    NASA Astrophysics Data System (ADS)

    Wanchuliak, O. Y.; Bachinskyi, V. T.

    2011-09-01

    In this work, on the basis of the Mueller-matrix description of optical anisotropy, the possibility of monitoring temporal changes of myocardium tissue birefringence has been considered. An optical model of the polycrystalline networks of the myocardium is suggested. The results of investigating the interrelation between correlation parameters (correlation area, asymmetry coefficient and excess of the autocorrelation function) and fractal parameters (dispersion of the logarithmic dependencies of the power spectra) are presented. These parameters characterize the distributions of Mueller matrix elements in the points of laser images of myocardium histological sections. Criteria for differentiating the causes of death are determined.

  14. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed based on the employed algorithm toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C means (FCM) clustering and wavelet transform (WT) was applied toward inner luminal contour extraction. Subsequently, stent strut positions were detected by utilizing metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the position of stent strut were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts for every OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm integrated in a GUI represents a step forward toward the employment of automated quantitative analysis of ISR in clinical practice.
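
    A compact, self-contained fuzzy C-means sketch applied to OCT pixel intensities is shown below for illustration; the wavelet-derived metrics and strut detection of the actual algorithm are not reproduced here.

        import numpy as np

        def fcm(x, c=2, m=2.0, iters=100, tol=1e-5, rng=None):
            # x: (N,) pixel intensities. Returns cluster centres and (c, N) memberships.
            rng = rng or np.random.default_rng(0)
            u = rng.random((c, x.size))
            u /= u.sum(axis=0)                         # memberships sum to 1 per pixel
            for _ in range(iters):
                um = u ** m
                centres = um @ x / um.sum(axis=1)      # fuzzily weighted cluster means
                d = np.abs(x[None, :] - centres[:, None]) + 1e-12
                new_u = 1.0 / (d ** (2.0 / (m - 1.0))) # standard FCM membership update
                new_u /= new_u.sum(axis=0)
                if np.abs(new_u - u).max() < tol:
                    u = new_u
                    break
                u = new_u
            return centres, u

        # Usage sketch: hard-assign each pixel to its strongest cluster as a lumen/tissue map
        # centres, u = fcm(oct_frame.ravel()); mask = u.argmax(axis=0).reshape(oct_frame.shape)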

  15. Fibered fluorescence microscopy (FFM) of intra epidermal nerve fibers--translational marker for peripheral neuropathies in preclinical research: processing and analysis of the data

    NASA Astrophysics Data System (ADS)

    Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul

    2008-08-01

    Peripheral neuropathy can be caused by diabetes or AIDS, or be a side effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber-optic probe connected to a laser scanning unit. It allows in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in-vivo assessment of intra-epidermal nerve fibre density in various models of peripheral neuropathy. By moving the probe, FFM allows visualization of larger surfaces: during the movement, images are continuously captured, allowing an area larger than the field of view of the probe to be acquired. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames. We introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest-point matching. The use of algorithms running on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.

  16. Biometric analysis of the palm vein distribution by means two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used in healthcare for venipuncture, to locate patients' veins when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate the tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis is composed of several steps, the first of which is the enhancement of the acquired images, implemented with spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. The above process is focused on person recognition through near-infrared images of their palm-dorsal vein distributions. This work compares two different feature extraction techniques: moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used for the analysis of the performance of the algorithms. The first database used here is owned by the Hong Kong Polytechnic University and the second is our own database.
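
    A hedged sketch of such a pipeline is given below: contrast enhancement, local (adaptive) thresholding, and morphological clean-up of a near-infrared palm-dorsal image using scikit-image; all parameters are illustrative rather than the paper's.

        import numpy as np
        from skimage import exposure, filters, morphology

        def vein_pattern(nir_image):
            img = nir_image.astype(float) / float(nir_image.max())   # scale to [0, 1]
            eq = exposure.equalize_adapthist(img)                    # CLAHE enhancement
            local_thr = filters.threshold_local(eq, block_size=35)   # adaptive threshold
            veins = eq < local_thr                                   # veins appear dark in NIR
            veins = morphology.remove_small_objects(veins, min_size=64)
            veins = morphology.binary_closing(veins, morphology.disk(2))
            return morphology.skeletonize(veins)                     # 1-pixel-wide pattern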

  17. Revisiting the immune microenvironment of diffuse large B-cell lymphoma using a tissue microarray and immunohistochemistry: robust semi-automated analysis reveals CD3 and FoxP3 as potential predictors of response to R-CHOP

    PubMed Central

    Coutinho, Rita; Clear, Andrew J.; Mazzola, Emanuele; Owen, Andrew; Greaves, Paul; Wilson, Andrew; Matthews, Janet; Lee, Abigail; Alvarez, Rute; da Silva, Maria Gomes; Cabeçadas, José; Neuberg, Donna; Calaminici, Maria; Gribben, John G.

    2015-01-01

    Gene expression studies have identified the microenvironment as a prognostic player in diffuse large B-cell lymphoma. However, there is a lack of simple immune biomarkers that can be applied in the clinical setting and could be helpful in stratifying patients. Immunohistochemistry has been used for this purpose but the results are inconsistent. We decided to reinvestigate the immune microenvironment and its impact using immunohistochemistry, with two systems of image analysis, in a large set of patients with diffuse large B-cell lymphoma. Diagnostic tissue from 309 patients was arrayed onto tissue microarrays. Results from 161 chemoimmunotherapy-treated patients were used for outcome prediction. Positive cells, percentage stained area and numbers of pixels/area were quantified and results were compared with the purpose of inferring consistency between the two semi-automated systems. Measurement cutpoints were assessed using a recursive partitioning algorithm classifying results according to survival. Kaplan-Meier estimators and Fisher exact tests were evaluated to check for significant differences between measurement classes, and for dependence between pairs of measurements, respectively. Results were validated by multivariate analysis incorporating the International Prognostic Index. The concordance between the two systems of image analysis was surprisingly high, supporting their applicability for immunohistochemistry studies. Patients with a high density of CD3 and FoxP3 by both methods had a better outcome. Automated analysis should be the preferred method for immunohistochemistry studies. Following the use of two methods of semi-automated analysis we suggest that CD3 and FoxP3 play a role in predicting response to chemoimmunotherapy in diffuse large B-cell lymphoma. PMID:25425693

  18. Revisiting the immune microenvironment of diffuse large B-cell lymphoma using a tissue microarray and immunohistochemistry: robust semi-automated analysis reveals CD3 and FoxP3 as potential predictors of response to R-CHOP.

    PubMed

    Coutinho, Rita; Clear, Andrew J; Mazzola, Emanuele; Owen, Andrew; Greaves, Paul; Wilson, Andrew; Matthews, Janet; Lee, Abigail; Alvarez, Rute; da Silva, Maria Gomes; Cabeçadas, José; Neuberg, Donna; Calaminici, Maria; Gribben, John G

    2015-03-01

    Gene expression studies have identified the microenvironment as a prognostic player in diffuse large B-cell lymphoma. However, there is a lack of simple immune biomarkers that can be applied in the clinical setting and could be helpful in stratifying patients. Immunohistochemistry has been used for this purpose but the results are inconsistent. We decided to reinvestigate the immune microenvironment and its impact using immunohistochemistry, with two systems of image analysis, in a large set of patients with diffuse large B-cell lymphoma. Diagnostic tissue from 309 patients was arrayed onto tissue microarrays. Results from 161 chemoimmunotherapy-treated patients were used for outcome prediction. Positive cells, percentage stained area and numbers of pixels/area were quantified and results were compared with the purpose of inferring consistency between the two semi-automated systems. Measurement cutpoints were assessed using a recursive partitioning algorithm classifying results according to survival. Kaplan-Meier estimators and Fisher exact tests were evaluated to check for significant differences between measurement classes, and for dependence between pairs of measurements, respectively. Results were validated by multivariate analysis incorporating the International Prognostic Index. The concordance between the two systems of image analysis was surprisingly high, supporting their applicability for immunohistochemistry studies. Patients with a high density of CD3 and FoxP3 by both methods had a better outcome. Automated analysis should be the preferred method for immunohistochemistry studies. Following the use of two methods of semi-automated analysis we suggest that CD3 and FoxP3 play a role in predicting response to chemoimmunotherapy in diffuse large B-cell lymphoma. Copyright© Ferrata Storti Foundation.

  19. Metrological digital audio reconstruction

    DOEpatents

    Fadeyev, Vitaliy [Berkeley, CA]; Haber, Carl [Berkeley, CA]

    2004-02-19

    Audio information stored in the undulations of grooves in a medium such as a phonograph record may be reconstructed, with little or no contact, by measuring the groove shape using precision metrology methods coupled with digital image processing and numerical analysis. The effects of damage, wear, and contamination may be compensated, in many cases, through image processing and analysis methods. The speed and data handling capacity of available computing hardware make this approach practical. Two examples used a general purpose optical metrology system to study a 50 year old 78 r.p.m. phonograph record and a commercial confocal scanning probe to study a 1920's celluloid Edison cylinder. Comparisons are presented with stylus playback of the samples and with a digitally re-mastered version of an original magnetic recording. There is also a more extensive implementation of this approach, with dedicated hardware and software.

  20. Imaging quality analysis of multi-channel scanning radiometer

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Xu, Wujun; Wang, Chengliang

    2008-03-01

    The multi-channel scanning radiometer on board the FY-2 geostationary meteorological satellite plays a key role in remote sensing because of its wide field of view and continuous acquisition of multi-spectral images. It is important to evaluate image quality once the performance parameters of the imaging system have been validated. Several methods of evaluating imaging quality are discussed; the most fundamental of these is the MTF. For a photoelectric scanning remote-sensing instrument, the MTF in the scanning direction is the product of the optics transfer function (OTF), the detector transfer function (DTF) and the electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror must also be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions; the values are used for alignment purposes and to determine the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring, including the optical elements, the sampling of the detector itself, post-processing, and, for systems that image through it, the earth's atmosphere. Theoretical calculation and actual measurement show that the DTF and ETF are the main contributors to the system MTF and that the imaging quality satisfies the requirements of the instrument design.
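
    The cascade relation described above can be written compactly; the symbols below are generic (a sketch of the standard model, not notation taken from the instrument's documentation), with ν the spatial frequency in the scan direction:

      \[
        \mathrm{MTF}_{\mathrm{scan}}(\nu) \;=\; \bigl|\mathrm{OTF}(\nu)\bigr|\cdot\bigl|\mathrm{DTF}(\nu)\bigr|\cdot\bigl|\mathrm{ETF}(\nu)\bigr|
      \]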

  1. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  2. Quantitative correlational study of microbubble-enhanced ultrasound imaging and magnetic resonance imaging of glioma and early response to radiotherapy in a rat model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Chen; Lee, Dong-Hoon; Zhang, Kai

    Purpose: Radiotherapy remains a major treatment method for malignant tumors. Magnetic resonance imaging (MRI) is the standard modality for assessing glioma treatment response in the clinic. Compared to MRI, ultrasound imaging is low-cost and portable and can be used during intraoperative procedures. The purpose of this study was to quantitatively compare contrast-enhanced ultrasound (CEUS) imaging and MRI of irradiated gliomas in rats and to determine which quantitative ultrasound imaging parameters can be used for the assessment of early response to radiation in glioma. Methods: Thirteen nude rats with U87 glioma were used. A small thinned skull window preparation was performed to facilitate ultrasound imaging and mimic intraoperative procedures. Both CEUS and MRI with structural, functional, and molecular imaging parameters were performed at preradiation and at 1 day and 4 days postradiation. Statistical analysis was performed to determine the correlations between MRI and CEUS parameters and the changes between pre- and postradiation imaging. Results: Area under the curve (AUC) in CEUS showed significant difference between preradiation and 4 days postradiation, along with four MRI parameters, T2, apparent diffusion coefficient, cerebral blood flow, and amide proton transfer-weighted (APTw) (all p < 0.05). The APTw signal was correlated with three CEUS parameters, rise time (r = −0.527, p < 0.05), time to peak (r = −0.501, p < 0.05), and perfusion index (r = 0.458, p < 0.05). Cerebral blood flow was correlated with rise time (r = −0.589, p < 0.01) and time to peak (r = −0.543, p < 0.05). Conclusions: MRI can be used for the assessment of radiotherapy treatment response, and CEUS with AUC, as a newer technique, can also serve as one of the assessment methods for early response to radiation in glioma.
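
    As an illustration of the two quantities at the heart of this comparison, the sketch below computes the area under a contrast-enhanced ultrasound time-intensity curve with the trapezoidal rule and a Pearson correlation between imaging parameters. It is a minimal Python/NumPy sketch; the synthetic curve and variable names are assumptions, not the study's data or software.

      import numpy as np

      def ceus_auc(time_s, intensity):
          """Area under a CEUS time-intensity curve (trapezoidal rule)."""
          return np.trapz(intensity, time_s)

      def pearson_r(x, y):
          """Pearson correlation coefficient between two imaging parameters."""
          return np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]

      # Hypothetical example: a synthetic 60 s wash-in/wash-out curve
      time_s = np.linspace(0, 60, 121)
      intensity = np.exp(-(time_s - 15) ** 2 / 50.0)
      print("AUC =", ceus_auc(time_s, intensity))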

  3. Measurement, time-stamping, and analysis of electrodermal activity in fMRI

    NASA Astrophysics Data System (ADS)

    Smyser, Christopher; Grabowski, Thomas J.; Rainville, Pierre; Bechara, Antione; Razavi, Mehrdad; Mehta, Sonya; Eaton, Brent L.; Bolinger, Lizann

    2002-04-01

    A low cost fMRI-compatible system was developed for detecting electrodermal activity without inducing image artifact. Subject electrodermal activity was measured on the plantar surface of the foot using a standard recording circuit. Filtered analog skin conductance responses (SCR) were recorded with a general purpose, time-stamping data acquisition system. A conditioning paradigm involving painful thermal stimulation was used to demonstrate SCR detection and investigate neural correlates of conditioned autonomic activity. 128x128 pixel EPI-BOLD images were acquired with a GE 1.5T Signa scanner. Image analysis was performed using voxel-wise multiple linear regression. The covariate of interest was generated by convolving stimulus event onset with a standard hemodynamic response function. The function was time-shifted to determine optimal activation. Significance was tested using the t-statistic. Image quality was unaffected by the device, and conditioned and unconditioned SCRs were successfully detected. Conditioned SCRs correlated significantly with activity in the right anterior insular cortex. The effect was more robust when responses were scaled by SCR amplitude. The ability to measure and time register SCRs during fMRI acquisition enables studies of cognitive processes marked by autonomic activity, including those involving decision-making, pain, emotion, and addiction.
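
    The regression step described above (convolving stimulus onsets with a hemodynamic response function and testing the covariate voxel-wise) can be sketched as follows. This is a generic illustration, not the authors' code; the double-gamma HRF parameters and function names are assumptions.

      import numpy as np
      from scipy.stats import gamma

      def canonical_hrf(tr, duration=32.0):
          """Double-gamma hemodynamic response function sampled at the TR (SPM-like parameters assumed)."""
          t = np.arange(0, duration, tr)
          h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
          return h / h.sum()

      def regressor_from_onsets(onsets_s, n_scans, tr):
          """Convolve a stimulus-onset train with the HRF to form the covariate of interest."""
          stick = np.zeros(n_scans)
          stick[(np.asarray(onsets_s) / tr).astype(int)] = 1.0
          return np.convolve(stick, canonical_hrf(tr))[:n_scans]

      def ols_tstat(y, x):
          """t-statistic for the covariate of interest in a simple voxel-wise regression."""
          X = np.column_stack([np.ones_like(x), x])
          beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
          dof = len(y) - X.shape[1]
          sigma2 = np.sum((y - X @ beta) ** 2) / dof
          se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
          return beta[1] / se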

  4. Distributed memory parallel Markov random fields using graph partitioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinemann, C.; Perciano, T.; Ushizima, D.

    Markov random fields (MRF) based algorithms have attracted a large amount of interest in image analysis due to their ability to exploit contextual information about data. Image data generated by experimental facilities, though, continues to grow larger and more complex, making it more difficult to analyze in a reasonable amount of time. Applying image processing algorithms to large datasets requires alternative approaches to circumvent performance problems. Aiming to provide scientists with a new tool to recover valuable information from such datasets, we developed a general purpose distributed memory parallel MRF-based image analysis framework (MPI-PMRF). MPI-PMRF overcomes performance and memory limitations by distributing data and computations across processors. The proposed approach was successfully tested with synthetic and experimental datasets. Additionally, the performance of the MPI-PMRF framework is analyzed through a detailed scalability study. We show that a performance increase is obtained while maintaining an accuracy of the segmentation results higher than 98%. The contributions of this paper are: (a) development of a distributed memory MRF framework; (b) measurement of the performance increase of the proposed approach; (c) verification of segmentation accuracy in both synthetic and experimental, real-world datasets.
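
    The abstract does not include the MPI-PMRF source, so the sketch below only illustrates the general distributed-memory pattern it describes: the root rank splits an image into row blocks, each rank runs a local MRF-like smoothing/labeling pass, and results are gathered. The ICM-style update, the block layout, and the lack of halo exchange between ranks are simplifying assumptions, not the authors' implementation.

      # Minimal sketch (not the MPI-PMRF code); run with: mpiexec -n 4 python mrf_sketch.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          image = np.random.rand(1024, 1024).astype(np.float32)  # stand-in for experimental data
          blocks = np.array_split(image, size, axis=0)            # distribute row blocks
      else:
          blocks = None

      block = comm.scatter(blocks, root=0)

      # Local "MRF-like" pass: each pixel prefers agreement with its 4-neighbourhood mean
      # (a crude ICM stand-in; a real framework would also exchange halo rows between ranks).
      smooth = block.copy()
      smooth[1:-1, 1:-1] = 0.25 * (block[:-2, 1:-1] + block[2:, 1:-1] +
                                   block[1:-1, :-2] + block[1:-1, 2:])
      labels = (smooth > 0.5).astype(np.uint8)

      result = comm.gather(labels, root=0)
      if rank == 0:
          segmentation = np.vstack(result)
          print("segmented", segmentation.shape, "foreground fraction", segmentation.mean())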

  5. Three-dimensional segmentation of the tumor mass in computed tomographic images of neuroblastoma

    NASA Astrophysics Data System (ADS)

    Deglint, Hanford J.; Rangayyan, Rangaraj M.; Boag, Graham S.

    2004-05-01

    Tumor definition and diagnosis require the analysis of the spatial distribution and Hounsfield unit (HU) values of voxels in computed tomography (CT) images, coupled with a knowledge of normal anatomy. Segmentation of the tumor in neuroblastoma is complicated by the fact that the mass is almost always heterogeneous in nature; furthermore, viable tumor, necrosis, fibrosis, and normal tissue are often intermixed. Rather than attempt to separate these tissue types into distinct regions, we propose to explore methods to delineate the normal structures expected in abdominal CT images, remove them from further consideration, and examine the remaining parts of the images for the tumor mass. We explore the use of fuzzy connectivity for this purpose. Expert knowledge provided by the radiologist in the form of the expected structures and their shapes, HU values, and radiological characteristics are also incorporated in the segmentation algorithm. Segmentation and analysis of the tissue composition of the tumor can assist in quantitative assessment of the response to chemotherapy and in the planning of delayed surgery for resection of the tumor. The performance of the algorithm is evaluated using cases acquired from the Alberta Children's Hospital.
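
    Fuzzy connectedness, as mentioned above, is usually defined as the strength of the strongest path from a seed, where a path is only as strong as its weakest affinity. The 2D sketch below uses a Gaussian intensity-similarity affinity as an illustrative (assumed) choice; it is not the authors' algorithm, which also incorporates HU values and anatomical knowledge.

      import heapq
      import numpy as np

      def fuzzy_connectedness(img, seed, sigma=30.0):
          """Fuzzy connectedness map from a seed pixel: connectivity = max over paths of the
          minimum affinity along the path; affinity here is a Gaussian of intensity difference."""
          h, w = img.shape
          conn = np.zeros((h, w), dtype=float)
          conn[seed] = 1.0
          heap = [(-1.0, seed)]
          while heap:
              neg_c, (r, c) = heapq.heappop(heap)
              cur = -neg_c
              if cur < conn[r, c]:                      # stale heap entry
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w:
                      diff = float(img[r, c]) - float(img[rr, cc])
                      aff = np.exp(-diff * diff / (2.0 * sigma * sigma))
                      cand = min(cur, aff)
                      if cand > conn[rr, cc]:
                          conn[rr, cc] = cand
                          heapq.heappush(heap, (-cand, (rr, cc)))
          return conn

      # Usage: threshold the map to keep the region fuzzily connected to the seed
      # mask = fuzzy_connectedness(ct_slice, seed=(120, 140)) > 0.8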

  6. A review of multivariate methods in brain imaging data fusion

    NASA Astrophysics Data System (ADS)

    Sui, Jing; Adali, Tülay; Li, Yi-Ou; Yang, Honghui; Calhoun, Vince D.

    2010-03-01

    On joint analysis of multi-task brain imaging data sets, a variety of multivariate methods have shown their strengths and been applied to achieve different purposes based on their respective assumptions. In this paper, we provide a comprehensive review on optimization assumptions of six data fusion models, including 1) four blind methods: joint independent component analysis (jICA), multimodal canonical correlation analysis (mCCA), CCA on blind source separation (sCCA) and partial least squares (PLS); 2) two semi-blind methods: parallel ICA and coefficient-constrained ICA (CC-ICA). We also propose a novel model for joint blind source separation (BSS) of two datasets using a combination of sCCA and jICA, i.e., 'CCA+ICA', which, compared with other joint BSS methods, can achieve higher decomposition accuracy as well as the correct automatic source link. Applications of the proposed model to real multitask fMRI data are compared to joint ICA and mCCA; CCA+ICA further shows its advantages in capturing both shared and distinct information, differentiating groups, and interpreting duration of illness in schizophrenia patients, hence promising applicability to a wide variety of medical imaging problems.
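
    A minimal sketch of the two-stage "CCA+ICA" idea for two data sets, using scikit-learn: CCA links the sets through correlated canonical variates, and ICA is then applied to the joined variates. This is an illustration under simplifying assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.cross_decomposition import CCA
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 50))   # e.g., subjects x task-1 features
      Y = rng.standard_normal((100, 60))   # e.g., subjects x task-2 features

      # Step 1: CCA links the two data sets through correlated canonical variates.
      Xc, Yc = CCA(n_components=5).fit_transform(X, Y)

      # Step 2: ICA on the joined canonical variates recovers independent joint sources.
      joint_sources = FastICA(n_components=5, random_state=0).fit_transform(np.hstack([Xc, Yc]))
      print(joint_sources.shape)           # (subjects, components)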

  7. Using unmanned aerial vehicle (UAV) surveys and image analysis in the study of large surface-associated marine species: a case study on reef sharks Carcharhinus melanopterus shoaling behaviour.

    PubMed

    Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R

    2018-06-01

    A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort. This article is protected by copyright. All rights reserved.
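
    The shoal alignment discussed above is commonly quantified as group polarization, i.e., the length of the mean unit heading vector. A minimal sketch, assuming per-shark heading angles have already been extracted from the aerial imagery:

      import numpy as np

      def polarization(headings_rad):
          """Group alignment in [0, 1]: 1 = all sharks swim in the same direction, 0 = no alignment."""
          vx, vy = np.cos(headings_rad), np.sin(headings_rad)
          return np.hypot(vx.mean(), vy.mean())

      # Hypothetical comparison between the two habitats
      sandflat = np.deg2rad([10, 15, 5, 12, 8])
      channel = np.deg2rad([10, 160, 250, 80, 300])
      print(polarization(sandflat), polarization(channel))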

  8. TU-AB-202-06: Quantitative Evaluation of Deformable Image Registration in MRI-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooney, K; Zhao, T; Green, O

    Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first fraction MRI and the final fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated based on comparison of average TRE and DSC. Results: Image feature analysis produced between 2000 and 9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE<3 mm was 60% on the images. The head/neck patient had the best average TRE (1.9 mm±2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm±3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
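
    A minimal sketch of the two metrics used in this evaluation: the target registration error as the Euclidean distance between matched feature points, and the Dice similarity coefficient for skin-contour masks. Array names are placeholders, not the authors' data structures.

      import numpy as np

      def target_registration_error(pts_warped_mm, pts_fixed_mm):
          """TRE per matched feature pair: Euclidean distance between corresponding points (mm)."""
          d = np.asarray(pts_warped_mm, float) - np.asarray(pts_fixed_mm, float)
          return np.linalg.norm(d, axis=1)

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary contour masks."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      # tre = target_registration_error(matched_fx1_warped_mm, matched_fx5_mm)
      # print(tre.mean(), (tre < 3.0).mean())   # average TRE and passing rate at 3 mm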

  9. ADC texture—An imaging biomarker for high-grade glioma?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brynolfsson, Patrik; Hauksson, Jón; Karlsson, Mikael

    2014-10-15

    Purpose: Survival for high-grade gliomas is poor, at least partly explained by intratumoral heterogeneity contributing to treatment resistance. Radiological evaluation of treatment response is in most cases limited to assessment of tumor size months after the initiation of therapy. Diffusion-weighted magnetic resonance imaging (MRI) and its estimate of the apparent diffusion coefficient (ADC) has been widely investigated, as it reflects tumor cellularity and proliferation. The aim of this study was to investigate texture analysis of ADC images in conjunction with multivariate image analysis as a means for identification of pretreatment imaging biomarkers. Methods: Twenty-three consecutive high-grade glioma patients were treated with radiotherapy (2 Gy/60 Gy) with concomitant and adjuvant temozolomide. ADC maps and T1-weighted anatomical images with and without contrast enhancement were collected prior to treatment, and (residual) tumor contrast enhancement was delineated. A gray-level co-occurrence matrix analysis was performed on the ADC maps in a cuboid encapsulating the tumor in coronal, sagittal, and transversal planes, giving a total of 60 textural descriptors for each tumor. In addition, similar examinations and analyses were performed at day 1, week 2, and week 6 into treatment. Principal component analysis (PCA) was applied to reduce dimensionality of the data, and the five largest components (scores) were used in subsequent analyses. MRI assessment three months after completion of radiochemotherapy was used for classifying tumor progression or regression. Results: The score scatter plots revealed that the first, third, and fifth components of the pretreatment examinations exhibited a pattern that strongly correlated to survival. Two groups could be identified: one with a median survival after diagnosis of 1099 days and one with 345 days, p = 0.0001. Conclusions: By combining PCA and texture analysis, ADC texture characteristics were identified, which seems to hold pretreatment prognostic information, independent of known prognostic factors such as age, stage, and surgical procedure. These findings encourage further studies with a larger patient cohort.
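
    The texture step described above can be sketched with scikit-image's gray-level co-occurrence matrix utilities followed by PCA from scikit-learn. The quantization to 32 levels, the chosen distances/angles, and the five properties below are assumptions for illustration, not the study's exact settings.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.decomposition import PCA

      def glcm_features(adc_slice, levels=32):
          """A handful of GLCM descriptors from one plane of the tumor-enclosing cuboid."""
          bins = np.linspace(adc_slice.min(), adc_slice.max(), levels)
          img = (np.digitize(adc_slice, bins) - 1).astype(np.uint8)
          glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
          return [float(graycoprops(glcm, p).mean()) for p in props]

      # One row of textural descriptors per tumor, then PCA for the score scatter plots:
      # X = np.array([glcm_features(t) for t in tumor_adc_slices])
      # scores = PCA(n_components=5).fit_transform(X)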

  10. OSIRIS-REx Asteroid Sample Return Mission Image Analysis

    NASA Astrophysics Data System (ADS)

    Chevres Fernandez, Lee Roger; Bos, Brent

    2018-01-01

    NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation that was targeted was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu. These analyses will provide a broader understanding of the functionality of the camera system, which will in turn aid in the fly-down to the asteroid, as it will allow the selection of a suitable landing and sampling location.
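
    One simple way to flag "questionable" (for example, under-responsive) pixels from flight imagery is to compare each pixel against a local median over roughly flat frames. The sketch below is a generic illustration with an assumed tolerance; it is not the mission's calibration procedure.

      import numpy as np
      from scipy.ndimage import median_filter

      def flag_bad_pixels(frames, rel_tol=0.3):
          """Flag pixels that deviate persistently from their local neighbourhood.
          `frames` is a stack (n, rows, cols) of roughly flat-field images."""
          mean_frame = np.mean(frames, axis=0)
          local = median_filter(mean_frame, size=5)
          rel_dev = np.abs(mean_frame - local) / np.maximum(local, 1e-6)
          return rel_dev > rel_tol        # boolean bad-pixel map

      # bad_map = flag_bad_pixels(navcam1_ega_frames)   # hypothetical frame stack
      # print("questionable pixels:", int(bad_map.sum()))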

  11. Visual System Involvement in Patients with Newly Diagnosed Parkinson Disease.

    PubMed

    Arrigo, Alessandro; Calamuneri, Alessandro; Milardi, Demetrio; Mormina, Enricomaria; Rania, Laura; Postorino, Elisa; Marino, Silvia; Di Lorenzo, Giuseppe; Anastasi, Giuseppe Pio; Ghilardi, Maria Felice; Aragona, Pasquale; Quartarone, Angelo; Gaeta, Michele

    2017-12-01

    Purpose To assess intracranial visual system changes of newly diagnosed Parkinson disease in drug-naïve patients. Materials and Methods Twenty patients with newly diagnosed Parkinson disease and 20 age-matched control subjects were recruited. Magnetic resonance (MR) imaging (T1-weighted and diffusion-weighted imaging) was performed with a 3-T MR imager. White matter changes were assessed by exploring a white matter diffusion profile by means of diffusion-tensor imaging-based parameters and constrained spherical deconvolution-based connectivity analysis and by means of white matter voxel-based morphometry (VBM). Alterations in occipital gray matter were investigated by means of gray matter VBM. Morphologic analysis of the optic chiasm was based on manual measurement of regions of interest. Statistical testing included analysis of variance, t tests, and permutation tests. Results In the patients with Parkinson disease, significant alterations were found in optic radiation connectivity distribution, with decreased lateral geniculate nucleus V2 density (F, -8.28; P < .05), a significant increase in optic radiation mean diffusivity (F, 7.5; P = .014), and a significant reduction in white matter concentration. VBM analysis also showed a significant reduction in visual cortical volumes (P < .05). Moreover, the chiasmatic area and volume were significantly reduced (P < .05). Conclusion The findings show that visual system alterations can be detected in early stages of Parkinson disease and that the entire intracranial visual system can be involved. © RSNA, 2017 Online supplemental material is available for this article.

  12. Stability, Visibility, and Histologic Analysis of a New Implanted Fiducial for Use as a Kilovoltage Radiographic or Radioactive Marker for Patient Positioning and Monitoring in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neustadter, David, E-mail: david.n@navotek.co; Tune, Michal; Zaretsky, Asaph

    Purpose: To analyze the stability, visibility, and histology of a novel implantable soft-tissue marker (nonradioactive and radioactive) implanted in dog prostate and rabbit liver. Methods and Materials: A total of 34 nonradioactive and 35 radioactive markers were implanted in 1 dog and 16 rabbits. Stability was assessed by measuring intermarker distance (IMD) variation relative to IMDs at implantation. The IMDs were measured weekly for 4 months in the dog and biweekly for 2-4 weeks in the rabbits. Ultrasound and X-ray imaging were performed on all subjects. Computed tomography and MRI were performed on the dog. Histologic analysis was performed on the rabbits after 2 or 4 months. Results: A total of 139 measurements had a mean (± SD) absolute IMD variation of 1.1 ± 1.1 mm. These IMD variations are consistent with those reported in the literature as due to random organ deformation. The markers were visible, identifiable, and induced minimal or no image artifacts in all tested imaging modalities. Histologic analysis revealed that all pathologic changes were highly localized and not expected to be clinically significant. Conclusions: The markers were stable from the time of implantation. The markers were found to be compatible with all common medical imaging modalities. The markers caused no significant histologic effects. With respect to marker stability, visibility, and histologic analysis, these implanted fiducials are appropriate for soft-tissue target positioning in radiotherapy.

  13. Analysis of security of optical encryption with spatially incoherent illumination technique

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Shifrina, Anna V.

    2017-03-01

    Applications of optical methods for encryption purposes have been attracting the interest of researchers for decades. The first and most popular is the double random phase encoding (DRPE) technique, and many optical encryption techniques are based on it. The main advantage of DRPE-based techniques is high security: the first random phase mask transforms the spectrum of the image to be encrypted into a white spectrum, so the encrypted images have white spectra. The downsides are the necessity of a holographic registration scheme, needed to register not only the light intensity distribution but also its phase distribution, and the speckle noise that arises from coherent illumination. These disadvantages can be eliminated by using incoherent illumination instead of coherent illumination. In this case, phase registration no longer matters, so there is no need for a holographic setup, and the speckle noise is gone. This technique does not have the drawbacks inherent to coherent methods; however, as only the light intensity distribution is considered, the mean value of the image to be encrypted is always above zero, which leads to an intensive zero-spatial-frequency peak in the image spectrum. Consequently, in the case of spatially incoherent illumination, the image spectrum, as well as the encryption key spectrum, cannot be white. This might be used to crack the encryption system. If the encryption key is very sparse, the encrypted image might contain parts of, or even the whole of, the unhidden original image. Therefore, in this paper an analysis of the security of optical encryption with spatially incoherent illumination is conducted as a function of encryption key size and density.
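
    For reference, a minimal NumPy sketch of the classical coherent DRPE scheme that the incoherent technique is contrasted with: one random phase mask in the image plane, a second in the Fourier plane, with decryption applying the conjugate keys in reverse order. Mask sizes and the test image are arbitrary.

      import numpy as np

      def drpe_encrypt(img, phase1, phase2):
          """Classical double random phase encoding (coherent model)."""
          field = img * np.exp(1j * phase1)                       # first random phase mask (image plane)
          spectrum = np.fft.fft2(field) * np.exp(1j * phase2)     # second mask (Fourier plane)
          return np.fft.ifft2(spectrum)                           # complex-valued ciphertext

      def drpe_decrypt(cipher, phase1, phase2):
          spectrum = np.fft.fft2(cipher) * np.exp(-1j * phase2)
          return np.abs(np.fft.ifft2(spectrum) * np.exp(-1j * phase1))

      rng = np.random.default_rng(1)
      img = rng.random((64, 64))
      p1, p2 = 2 * np.pi * rng.random((2, 64, 64))
      cipher = drpe_encrypt(img, p1, p2)
      print(np.allclose(drpe_decrypt(cipher, p1, p2), img))       # True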

  14. Investigation of image enhancement techniques for the development of a self-contained airborne radar navigation system

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Karmali, M. S.

    1983-01-01

    This study was devoted to an investigation of the feasibility of applying advanced image processing techniques to enhance radar image characteristics that are pertinent to the pilot's navigation and guidance task. Millimeter (95 GHz) wave radar images for the overwater (i.e., offshore oil rigs) and overland (Heliport) scenario were used as a data base. The purpose of the study was to determine the applicability of image enhancement and scene analysis algorithms to detect and improve target characteristics (i.e., manmade objects such as buildings, parking lots, cars, roads, helicopters, towers, landing pads, etc.) that would be helpful to the pilot in determining his own position/orientation with respect to the outside world and assist him in the navigation task. Results of this study show that significant improvements in the raw radar image may be obtained using two dimensional image processing algorithms. In the overwater case, it is possible to remove the ocean clutter by thresholding the image data, and furthermore to extract the target boundary as well as the tower and catwalk locations using noise cleaning (e.g., median filter) and edge detection (e.g., Sobel operator) algorithms.
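
    The overwater processing chain described above (threshold out ocean clutter, clean with a median filter, extract boundaries with a Sobel operator) can be sketched with SciPy as follows; the percentile used as the clutter cut-off is an assumption.

      import numpy as np
      from scipy.ndimage import median_filter, sobel

      def enhance_overwater(radar_img, clutter_percentile=70):
          """Suppress ocean clutter, clean noise, and extract target boundaries."""
          thresh = np.percentile(radar_img, clutter_percentile)     # assumed clutter cut-off
          targets = np.where(radar_img > thresh, radar_img, 0.0)    # remove ocean clutter
          cleaned = median_filter(targets, size=3)                  # noise cleaning
          edges = np.hypot(sobel(cleaned, axis=0), sobel(cleaned, axis=1))
          return cleaned, edges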

  15. Combined imaging and chemical sensing using a single optical imaging fiber.

    PubMed

    Bronk, K S; Michael, K L; Pantano, P; Walt, D R

    1995-09-01

    Despite many innovations and developments in the field of fiber-optic chemical sensors, optical fibers have not been employed to both view a sample and concurrently detect an analyte of interest. While chemical sensors employing a single optical fiber or a noncoherent fiberoptic bundle have been applied to a wide variety of analytical determinations, they cannot be used for imaging. Similarly, coherent imaging fibers have been employed only for their originally intended purpose, image transmission. We herein report a new technique for viewing a sample and measuring surface chemical concentrations that employs a coherent imaging fiber. The method is based on the deposition of a thin, analyte-sensitive polymer layer on the distal surface of a 350-microns-diameter imaging fiber. We present results from a pH sensor array and an acetylcholine biosensor array, each of which contains approximately 6000 optical sensors. The acetylcholine biosensor has a detection limit of 35 microM and a fast (< 1 s) response time. In association with an epifluorescence microscope and a charge-coupled device, these modified imaging fibers can display visual information of a remote sample with 4-microns spatial resolution, allowing for alternating acquisition of both chemical analysis and visual histology.

  16. Update on imaging techniques in oculoplastics

    PubMed Central

    Cetinkaya, Altug

    2012-01-01

    Imaging is a beneficial aid to the oculoplastic surgeon, especially in orbital and lacrimal disorders when the pathology is not visible from outside. It is a powerful tool that can be of benefit not only in diagnosis but also in management and follow-up. The most common imaging modalities required are CT and MRI, with CT being more frequently ordered by oculoplastic surgeons. Improvements in technology have shortened acquisition times dramatically. Radiologists can now obtain images with superb resolution and isolate the site and tissue of interest from other structures with special techniques. Better contrast agents and 3D imaging capabilities make complicated cases easier to identify. Color Doppler imaging is becoming more popular for both research and clinical purposes. Magnetic resonance angiography (MRA) has recently added much to vascular system imaging. Although angiography is still the gold standard, new software and techniques have rendered MRA as valuable as angiography in most circumstances. Stereotactic navigation, although in use for a long time, has recently become a focus of interest for the oculoplastic surgeon, especially in orbital decompressions. Improvements in radiology and nuclear medicine techniques for lacrimal drainage system imaging have provided more detailed analysis of the system. PMID:23961020

  17. Spatio-Chromatic Adaptation via Higher-Order Canonical Correlation Analysis of Natural Images

    PubMed Central

    Gutmann, Michael U.; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation. PMID:24533049

  18. Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.

    PubMed

    Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.

  19. Heterogeneity of Glucose Metabolism in Esophageal Cancer Measured by Fractal Analysis of Fluorodeoxyglucose Positron Emission Tomography Image: Correlation between Metabolic Heterogeneity and Survival.

    PubMed

    Tochigi, Toru; Shuto, Kiyohiko; Kono, Tsuguaki; Ohira, Gaku; Tohma, Takayuki; Gunji, Hisashi; Hayano, Koichi; Narushima, Kazuo; Fujishiro, Takeshi; Hanaoka, Toshiharu; Akutsu, Yasunori; Okazumi, Shinichi; Matsubara, Hisahiro

    2017-01-01

    Intratumoral heterogeneity is a well-recognized characteristic feature of cancer. The purpose of this study is to assess the heterogeneity of intratumoral glucose metabolism using fractal analysis, and evaluate its prognostic value in patients with esophageal squamous cell carcinoma (ESCC). 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) studies of 79 patients who received curative surgery were evaluated. FDG-PET images were analyzed using fractal analysis software, where a differential box-counting method was employed to calculate the fractal dimension (FD) of the tumor lesion. Maximum standardized uptake value (SUVmax) and FD were compared with overall survival (OS). The median SUVmax and FD of ESCCs in this cohort were 13.8 and 1.95, respectively. In univariate analysis performed using Cox's proportional hazard model, T stage and FD showed significant associations with OS (p = 0.04, p < 0.0001, respectively), while SUVmax did not (p = 0.1). In Kaplan-Meier analysis, low FD tumors (<1.95) showed a significant association with favorable OS (p < 0.0001). In the multivariate analysis among TNM staging, serum tumor markers, FD, and SUVmax, the FD was identified as the only independent prognostic factor for OS (p = 0.0006; hazard ratio 0.251, 95% CI 0.104-0.562). Metabolic heterogeneity measured by fractal analysis can be a novel imaging biomarker for survival in patients with ESCC. © 2016 S. Karger AG, Basel.
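
    The study used a differential box-counting method on the PET images; the simplified sketch below estimates a fractal dimension of a binary tumor mask by ordinary box counting, i.e., from the slope of log(occupied boxes) versus log(1/box size). It illustrates the principle only and is not the authors' software.

      import numpy as np

      def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
          """Estimate a fractal dimension of a 2D binary mask by box counting."""
          counts = []
          for s in sizes:
              h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
              blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
          return slope

      # fd = box_counting_dimension(tumor_mask)   # compare against the 1.95 cutpoint reported above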

  20. Automated Cross-Sectional Measurement Method of Intracranial Dural Venous Sinuses.

    PubMed

    Lublinsky, S; Friedman, A; Kesler, A; Zur, D; Anconina, R; Shelef, I

    2016-03-01

    MRV is an important blood vessel imaging and diagnostic tool for the evaluation of stenosis, occlusions, or aneurysms. However, an accurate image-processing tool for vessel comparison is unavailable. The purpose of this study was to develop and test an automated technique for vessel cross-sectional analysis. An algorithm for vessel cross-sectional analysis was developed that included 7 main steps: 1) image registration, 2) masking, 3) segmentation, 4) skeletonization, 5) cross-sectional planes, 6) clustering, and 7) cross-sectional analysis. Phantom models were used to validate the technique. The method was also tested on a control subject and a patient with idiopathic intracranial hypertension (4 large sinuses tested: right and left transverse sinuses, superior sagittal sinus, and straight sinus). The cross-sectional area and shape measurements were evaluated before and after lumbar puncture in patients with idiopathic intracranial hypertension. The vessel-analysis algorithm had a high degree of stability with <3% of cross-sections manually corrected. All investigated principal cranial blood sinuses had a significant cross-sectional area increase after lumbar puncture (P ≤ .05). The average triangularity of the transverse sinuses was increased, and the mean circularity of the sinuses was decreased by 6% ± 12% after lumbar puncture. Comparison of phantom and real data showed that all computed errors were <1 voxel unit, which confirmed that the method provided a very accurate solution. In this article, we present a novel automated imaging method for cross-sectional vessels analysis. The method can provide an efficient quantitative detection of abnormalities in the dural sinuses. © 2016 by American Journal of Neuroradiology.
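
    The cross-sectional shape measurements mentioned above (area and circularity of each vessel cross-section) can be sketched with scikit-image region properties; the 4*pi*A/P^2 circularity definition is a standard choice assumed here, not necessarily the paper's exact formula.

      import numpy as np
      from skimage.measure import label, regionprops

      def cross_section_metrics(section_mask):
          """Area (pixels) and circularity (1.0 for a perfect circle) of the largest
          connected component in one vessel cross-sectional plane."""
          props = max(regionprops(label(section_mask.astype(np.uint8))), key=lambda p: p.area)
          circularity = 4.0 * np.pi * props.area / (props.perimeter ** 2)
          return props.area, circularity

      # areas_pre = [cross_section_metrics(m)[0] for m in transverse_sinus_sections_pre_lp]
      # areas_post = [cross_section_metrics(m)[0] for m in transverse_sinus_sections_post_lp]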

  1. SU-E-J-42: Evaluation of Fiducial Markers for Ultrasound and X-Ray Images Used for Motion Tracking in Pancreas SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, SK; Armour, E; Su, L

    Purpose Ultrasound tracking of target motion relies on visibility of vascular and/or anatomical landmarks. However, this is challenging when the target is located far from vascular structures or in organs that lack ultrasound landmark structure, such as in the case of pancreas cancer. The purpose of this study is to evaluate visibility, artifacts and distortions of fusion coils and solid gold markers in ultrasound, CT, CBCT and kV images to identify markers suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment. Methods Two fusion coils (1 mm × 5 mm and 1 mm × 10 mm) and a solid gold marker (0.8 mm × 10 mm) were embedded in a tissue-like ultrasound phantom. The phantom (5 cm × 12 cm × 20 cm) was prepared using water, gelatin and psyllium-hydrophilic-mucilloid fiber. Psyllium-hydrophilic mucilloid acts as a scattering medium to produce echo texture that simulates the sonographic appearance of human tissue in ultrasound images while maintaining electron density close to that of water in CT images. Ultrasound images were acquired using a 3D ultrasound system with markers embedded at 5, 10 and 15 mm depth from the phantom surface. CT images were acquired using a Philips Big Bore CT while CBCT and kV images were acquired with the XVI system (Elekta). Visual analysis was performed to compare visibility of the markers, and visibility scores (1 to 3) were assigned. Results All markers embedded at various depths are clearly visible (score of 3) in ultrasound images. Good visibility of all markers is observed in CT, CBCT and kV images. The degree of artifact produced by the markers in CT and CBCT images is indistinguishable. No distortion is observed in images from any modality. Conclusion All markers are visible in images across all modalities in this homogenous tissue-like phantom. Human subject data is necessary to confirm the marker type suitable for real-time ultrasound tracking of tumor motion in SBRT pancreas treatment.

  2. Hospital integrated parallel cluster for fast and cost-efficient image analysis: clinical experience and research evaluation

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter

    2001-08-01

    In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier, but it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. Since images are available at one's fingertips, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.

  3. Incongruity of imaging using fluorescent 2-DG conjugates compared to 18F-FDG in preclinical cancer models.

    PubMed

    Tseng, Jen-Chieh; Wang, Yuchuan; Banerjee, Pallab; Kung, Andrew L

    2012-10-01

    We compared the use of near-infrared conjugates of 2-deoxyglucose (NIR 2-DG) to 2-deoxy-2-[18F]fluoro-d-glucose (18F-FDG) for the purposes of imaging tumors, as well as response to therapy. Uptake of both 18F-FDG and NIR 2-DG within gastrointestinal stromal tumor xenografts was imaged before and after nilotinib treatment. Confocal microscopy was performed to determine NIR 2-DG distribution in tumors. Treatment with nilotinib resulted in a rapid reduction in 18F-FDG uptake and reduced tumor cell viability, which was predictive of long-term antitumor efficacy. In contrast, optical imaging with NIR 2-DG probes was unable to differentiate control from nilotinib-treated animals, and microscopic analysis revealed no change in probe distribution as a result of treatment. These results suggest that conjugation of large bulky fluorophores to 2-DG disrupts the facilitated transport and retention of these probes in cells. Therefore, optical imaging of NIR 2-DG probes cannot substitute for 18F-FDG positron emission tomography imaging as a biomarker of tumor cell viability and metabolism.

  4. A novel computer-assisted image analysis of [123I]β-CIT SPECT images improves the diagnostic accuracy of parkinsonian disorders.

    PubMed

    Goebel, Georg; Seppi, Klaus; Donnemiller, Eveline; Warwitz, Boris; Wenning, Gregor K; Virgolini, Irene; Poewe, Werner; Scherfler, Christoph

    2011-04-01

    The purpose of this study was to develop an observer-independent algorithm for the correct classification of dopamine transporter SPECT images as Parkinson's disease (PD), multiple system atrophy parkinson variant (MSA-P), progressive supranuclear palsy (PSP) or normal. A total of 60 subjects with clinically probable PD (n = 15), MSA-P (n = 15) and PSP (n = 15), and 15 age-matched healthy volunteers, were studied with the dopamine transporter ligand [(123)I]β-CIT. Parametric images of the specific-to-nondisplaceable equilibrium partition coefficient (BP(ND)) were generated. Following a voxel-wise ANOVA, cut-off values were calculated from the voxel values of the resulting six post-hoc t-test maps. The percentages of the volume of an individual BP(ND) image remaining below and above the cut-off values were determined. The higher percentage of image volume from all six cut-off matrices was used to classify an individual's image. For validation, the algorithm was compared to a conventional region of interest analysis. The predictive diagnostic accuracy of the algorithm in the correct assignment of a [(123)I]β-CIT SPECT image was 83.3% and increased to 93.3% on merging the MSA-P and PSP groups. In contrast the multinomial logistic regression of mean region of interest values of the caudate, putamen and midbrain revealed a diagnostic accuracy of 71.7%. In contrast to a rater-driven approach, this novel method was superior in classifying [(123)I]β-CIT-SPECT images as one of four diagnostic entities. In combination with the investigator-driven visual assessment of SPECT images, this clinical decision support tool would help to improve the diagnostic yield of [(123)I]β-CIT SPECT in patients presenting with parkinsonism at their initial visit.

  5. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  6. Spectral analysis for automated exploration and sample acquisition

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi

    1992-01-01

    Future space exploration missions will rely heavily on the use of complex instrument data for determining the geologic, chemical, and elemental character of planetary surfaces. One important instrument is the imaging spectrometer, which collects complete images in multiple discrete wavelengths in the visible and infrared regions of the spectrum. Extensive computational effort is required to extract information from such high-dimensional data. A hierarchical classification scheme allows multispectral data to be analyzed for purposes of mineral classification while limiting the overall computational requirements. The hierarchical classifier exploits the tunability of a new type of imaging spectrometer which is based on an acousto-optic tunable filter. This spectrometer collects a complete image in each wavelength passband without spatial scanning. It may be programmed to scan through a range of wavelengths or to collect only specific bands for data analysis. Spectral classification activities employ artificial neural networks, trained to recognize a number of mineral classes. Analysis of the trained networks has proven useful in determining which subsets of spectral bands should be employed at each step of the hierarchical classifier. The network classifiers are capable of recognizing all mineral types which were included in the training set. In addition, the major components of many mineral mixtures can also be recognized. This capability may prove useful for a system designed to evaluate data in a strange environment where details of the mineral composition are not known in advance.

  7. ToF-SIMS measurements with topographic information in combined images.

    PubMed

    Koch, Sabrina; Ziegler, Georg; Hutter, Herbert

    2013-09-01

    In 2D and 3D time-of-flight secondary ion mass spectrometric (ToF-SIMS) analysis, accentuated structures on the sample surface induce distorted element distributions in the measurement. The origin of this effect is the 45° incidence angle of the analysis beam, recording planar images with distortion of the sample surface. For the generation of correct element distributions, these artifacts associated with the sample surface need to be eliminated by measuring the sample surface topography and applying suitable algorithms. For this purpose, the next generation of ToF-SIMS instruments will feature a scanning probe microscope directly implemented in the sample chamber which allows the performance of topography measurements in situ. This work presents the combination of 2D and 3D ToF-SIMS analysis with topographic measurements by ex situ techniques such as atomic force microscopy (AFM), confocal microscopy (CM), and digital holographic microscopy (DHM). The concept of the combination of topographic and ToF-SIMS measurements in a single representation was applied to organic and inorganic samples featuring surface structures in the nanometer and micrometer ranges. The correct representation of planar and distorted ToF-SIMS images was achieved by the combination of topographic data with images of 2D as well as 3D ToF-SIMS measurements, using either AFM, CM, or DHM for the recording of topographic data.

  8. Analysis of the Image of Scientists Portrayed in the Lebanese National Science Textbooks

    NASA Astrophysics Data System (ADS)

    Yacoubian, Hagop A.; Al-Khatib, Layan; Mardirossian, Taline

    2017-07-01

    This article presents an analysis of how scientists are portrayed in the Lebanese national science textbooks. The purpose of this study was twofold. First, to develop a comprehensive analytical framework that can serve as a tool to analyze the image of scientists portrayed in educational resources. Second, to analyze the image of scientists portrayed in the Lebanese national science textbooks that are used in Basic Education. An analytical framework, based on an extensive review of the relevant literature, was constructed that served as a tool for analyzing the textbooks. Based on evidence-based stereotypes, the framework focused on the individual and work-related characteristics of scientists. Fifteen science textbooks were analyzed using both quantitative and qualitative measures. Our analysis of the textbooks showed the presence of a number of stereotypical images. The scientists are predominantly white males of European descent. Non-Western scientists, including Lebanese and/or Arab scientists are mostly absent in the textbooks. In addition, the scientists are portrayed as rational individuals who work alone, who conduct experiments in their labs by following the scientific method, and by operating within Eurocentric paradigms. External factors do not influence their work. They are engaged in an enterprise which is objective, which aims for discovering the truth out there, and which involves dealing with direct evidence. Implications for science education are discussed.

  9. Early Identification of Aortic Valve Sclerosis Using Iron Oxide Enhanced MRI

    PubMed Central

    Hamilton, Amanda M.; Rogers, Kem A.; Belisle, Andre J.L.; Ronald, John A.; Rutt, Brian K.; Weissleder, Ralph; Boughner, Derek R.

    2017-01-01

    Purpose To test the ability of MION-47 enhanced MRI to identify tissue macrophage infiltration in a rabbit model of aortic valve sclerosis (AVS). Materials and Methods The aortic valves of control and cholesterol-fed New Zealand White rabbits were imaged in vivo pre- and 48 h post-intravenous administration of MION-47 using a 1.5 Tesla (T) MR clinical scanner and a CINE fSPGR sequence. MION-47 aortic valve cusps were imaged ex vivo on a 3.0T whole-body MR system with a custom gradient insert coil and a three-dimensional (3D) FIESTA sequence and compared with aortic valve cusps from control and cholesterol-fed contrast-free rabbits. Histopathological analysis was performed to determine the site of iron oxide uptake. Results MION-47 enhanced the visibility of both control and cholesterol-fed rabbit valves in in vivo images. Ex vivo image analysis confirmed the presence of significant signal voids in contrast-administered aortic valves. Signal voids were not observed in contrast-free valve cusps. In MION-47 administered rabbits, histopathological analysis revealed iron staining not only in fibrosal macrophages of cholesterol-fed valves but also in myofibroblasts from control and cholesterol-fed valves. Conclusion Although iron oxide labeling of macrophage infiltration in AVS has the potential to detect the disease process early, a macrophage-specific iron compound rather than passive targeting may be required. PMID:20027578

  10. In vivo optical coherence tomography imaging of dissolution of hyaluronic acid microneedles in human skin (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Song, Seungri; Kim, Jung Dong; Bae, Jung-hyun; Chang, Sooho; Kim, Soocheol; Lee, Hyungsuk; Jeong, Dohyeon; Kim, Hong Kee; Joo, Chulmin

    2017-02-01

    Transdermal drug delivery (TDD) has been recently highlighted as an alternative to oral delivery and hypodermic injections. Among many methods, drug delivery using a microneedle (MN) is one of the promising administration strategies due to its high skin permeability, minimal invasiveness, and ease of injection. In addition, microneedle-based TDD is being explored for cosmetic and therapeutic purposes, rapidly expanding the microneedle industry for the general population. To date, visualization of microneedles inserted into biological tissue has primarily been performed ex vivo. MRI, CT and ultrasound imaging do not provide sufficient spatial resolution, and optical microscopy is not suitable because of its limited imaging depth; the structure of microneedles located 0.2-1 mm into the skin cannot be visualized. Optical coherence tomography (OCT) is a non-invasive, cross-sectional optical imaging modality for biological tissue with high spatial resolution and acquisition speed. Compared with ultrasound imaging, it exhibits superior spatial resolution (1-10 µm) and high sensitivity, while providing an imaging depth in biological tissue down to 1-2 mm. Here, we present in situ imaging and analysis of the penetration and dissolution characteristics of hyaluronic acid-based MNs (HA-MNs) with various needle heights in human skin in vivo. In contrast to other studies, we measured the actual penetration depths of the HA-MNs by considering the experimentally measured refractive index of HA in the solid state. For the dissolution dynamics of the HA-MNs, time-lapse structural alteration of the MNs could be clearly visualized, and the volumetric changes of the MNs were measured with an image analysis algorithm.

  11. Real-time Supervised Detection of Pink Areas in Dermoscopic Images of Melanoma: Importance of Color Shades, Texture and Location

    PubMed Central

    Kaur, Ravneet; Albano, Peter P.; Cole, Justin G.; Hagerty, Jason; LeAnder, Robert W.; Moss, Randy H.; Stoecker, William V.

    2015-01-01

    Background/Purpose Early detection of malignant melanoma is an important public health challenge. In the USA, dermatologists are seeing more melanomas at an early stage, before classic melanoma features have become apparent. Pink color is a feature of these early melanomas. If rapid and accurate automatic detection of pink color in these melanomas could be accomplished, there could be significant public health benefits. Methods Detection of three shades of pink (light pink, dark pink, and orange pink) was accomplished using color analysis techniques in five color planes (red, green, blue, hue and saturation). Color shade analysis was performed using a logistic regression model trained with an image set of 60 dermoscopic images of melanoma that contained pink areas. Detected pink shade areas were further analyzed with regard to the location within the lesion, average color parameters over the detected areas, and histogram texture features. Results Logistic regression analysis of a separate set of 128 melanomas and 128 benign images resulted in up to 87.9% accuracy in discriminating melanoma from benign lesions, measured using area under the receiver operating characteristic curve. The accuracy of this model decreased when parameters for individual shades, texture, or shade location within the lesion were omitted. Conclusion Texture, color, and lesion location analysis applied to multiple shades of pink can assist in melanoma detection. When any of these three details (color location, shade analysis, or texture analysis) was omitted from the model, accuracy in separating melanoma from benign lesions was lower. Separation of colors into shades and further details that enhance the characterization of these color shades are needed for optimal discrimination of melanoma from benign lesions. PMID:25809473
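
    A minimal sketch of the discrimination step: a logistic regression over per-lesion color, texture, and location features, scored by area under the ROC curve with scikit-learn. The random feature matrix and feature count are placeholders, not the paper's descriptors.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.standard_normal((256, 12))        # placeholder pink-shade, texture, location features
      y = rng.integers(0, 2, 256)               # 1 = melanoma, 0 = benign

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
      print("ROC AUC:", round(auc, 3))          # the paper reports up to 0.879 on real data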

  12. MRI artifact reduction and quality improvement in the upper abdomen with PROPELLER and prospective acquisition correction (PACE) technique.

    PubMed

    Hirokawa, Yuusuke; Isoda, Hiroyoshi; Maetani, Yoji S; Arizono, Shigeki; Shimada, Kotaro; Togashi, Kaori

    2008-10-01

    The purpose of this study was to evaluate the effectiveness of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER [BLADE in the MR systems from Siemens Medical Solutions]) technique, combined with a respiratory compensation technique, for motion correction, image noise reduction, improved sharpness of the liver edge, and image quality of the upper abdomen. Twenty healthy adult volunteers with a mean age of 28 years (age range, 23-42 years) underwent upper abdominal MRI with a 1.5-T scanner. For each subject, fat-saturated T2-weighted turbo spin-echo (TSE) sequences with respiratory compensation (prospective acquisition correction [PACE]) were performed with and without the BLADE technique. Ghosting artifact; other artifacts, such as those from respiratory motion and bowel movement; sharpness of the liver edge; image noise; and overall image quality were evaluated visually by three radiologists using a 5-point scale for qualitative analysis. The Wilcoxon signed rank test was used to determine whether a significant difference existed between images with and without BLADE. A p value less than 0.05 was considered to be statistically significant. In the BLADE images, image artifacts, sharpness of the liver edge, image noise, and overall image quality were significantly improved (p < 0.001). With the BLADE technique, T2-weighted TSE images of the upper abdomen showed reduced image artifacts, including ghosting artifact and image noise, and better image quality.

  13. Forensic Analysis of the Sony Playstation Portable

    NASA Astrophysics Data System (ADS)

    Conrad, Scott; Rodriguez, Carlos; Marberry, Chris; Craiger, Philip

    The Sony PlayStation Portable (PSP) is a popular portable gaming device with features such as wireless Internet access and image, music and movie playback. As with most systems built around a processor and storage, the PSP can be used for purposes other than those for which it was originally intended - legal as well as illegal. This paper discusses the features of the PSP browser and suggests best practices for extracting digital evidence.

  14. Design and development of an ethnically-diverse imaging informatics-based eFolder system for multiple sclerosis patients

    PubMed Central

    Ma, Kevin C.; Fernandez, James R.; Amezcua, Lilyana; Lerner, Alex; Shiroishi, Mark S.; Liu, Brent J.

    2016-01-01

    Purpose: MRI has been used to identify multiple sclerosis (MS) lesions in the brain and spinal cord visually. Integrating patient information into an electronic patient record system has become key for modern patient care in recent years. Clinically, it is also necessary to track patients' progress in longitudinal studies, in order to provide a comprehensive understanding of disease progression and response to treatment. As the amount of required data increases, there exists a need for an efficient, systematic solution to store and analyze MS patient data, disease profiles, and disease tracking for both clinical and research purposes. Method: An imaging informatics based system, called the MS eFolder, has been developed as an integrated patient record system for data storage and analysis of MS patients. The eFolder system, with a DICOM-based database, includes a module for lesion contouring by radiologists, an MS lesion quantification tool that quantifies lesion volume in 3D, brain parenchyma fraction analysis, and quantitative analysis and tracking of volume changes in longitudinal studies. Patient data, including MR images, have been collected retrospectively at University of Southern California Medical Center (USC) and Los Angeles County Hospital (LAC). The MS eFolder utilizes web-based components, such as a browser-based graphical user interface (GUI) and a web-based database. The eFolder database stores patient clinical data (demographics, MS disease history, family history, etc.), MR imaging-related data found in DICOM headers, and lesion quantification results. Lesion quantification results are derived from radiologists' contours on brain MRI studies and quantified into 3-dimensional volumes and locations. Quantified results of white matter lesions are integrated into a structured report based on the DICOM-SR protocol and templates. The user interface displays patient clinical information, original MR images, and structured reports of quantified results. The GUI also includes a data mining tool to handle unique search queries for MS. System workflow and dataflow steps have been designed based on the IHE post-processing workflow profile, including workflow process tracking, MS lesion contouring and quantification of MR images at a post-processing workstation, and storage of quantitative results as DICOM-SR in a DICOM-based storage system. The web-based GUI is designed to display zero-footprint DICOM web-accessible data objects (WADO) and the SR objects. Summary: The MS eFolder system has been designed and developed as an integrated data storage and mining solution in both clinical and research environments, while providing unique features such as quantitative lesion analysis and disease tracking over a longitudinal study. The comprehensive image and clinical data integrated database provided by the MS eFolder offers a platform for treatment assessment, outcomes analysis and decision support. The proposed system serves as a platform for future quantitative analysis derived automatically from CAD algorithms that can also be integrated within the system for individual disease tracking and future MS-related research. Ultimately the eFolder provides a decision-support infrastructure that can eventually be used as add-on value to the overall electronic medical record. PMID:26564667
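
    One of the quantification steps described above, converting radiologists' lesion contours into 3D volumes, reduces to counting mask voxels and scaling by the voxel size taken from the DICOM headers. The sketch below assumes a binary 3D lesion mask plus PixelSpacing and SliceThickness values; it is an illustration, not the eFolder code.

```python
# Hedged sketch: 3D lesion volume from a binary mask and DICOM voxel geometry.
import numpy as np

def lesion_volume_ml(mask: np.ndarray, pixel_spacing_mm, slice_thickness_mm) -> float:
    """Return lesion volume in millilitres from a binary 3D mask."""
    voxel_mm3 = pixel_spacing_mm[0] * pixel_spacing_mm[1] * slice_thickness_mm
    return float(mask.sum()) * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

# Toy example: a small lesion of 40 voxels, each 0.9 x 0.9 x 3.0 mm.
mask = np.zeros((5, 64, 64), dtype=bool)
mask[2, 30:34, 30:40] = True
print(f"{lesion_volume_ml(mask, (0.9, 0.9), 3.0):.3f} mL")
```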

  15. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2012-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them must grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism, that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques. Technical Methodology/Approach: Apply massively parallel algorithms and data structures to the specific analysis requirements presented when working with thermographic data sets.

  16. High Precision Thermal, Structural and Optical Analysis of an External Occulter Using a Common Model and the General Purpose Multi-Physics Analysis Tool Cielo

    NASA Technical Reports Server (NTRS)

    Hoff, Claus; Cady, Eric; Chainyk, Mike; Kissil, Andrew; Levine, Marie; Moore, Greg

    2011-01-01

    The efficient simulation of multidisciplinary thermo-opto-mechanical effects in precision deployable systems has for years been limited by numerical toolsets that do not necessarily share the same finite element basis, level of mesh discretization, data formats, or compute platforms. Cielo, a general purpose integrated modeling tool funded by the Jet Propulsion Laboratory and the Exoplanet Exploration Program, addresses shortcomings in the current state of the art via features that enable the use of a single, common model for thermal, structural and optical aberration analysis, producing results of greater accuracy, without the need for results interpolation or mapping. This paper will highlight some of these advances, and will demonstrate them within the context of detailed external occulter analyses, focusing on in-plane deformations of the petal edges for both steady-state and transient conditions, with subsequent optical performance metrics including intensity distributions at the pupil and image plane.

  17. TH-E-BRF-02: 4D-CT Ventilation Image-Based IMRT Plans Are Dosimetrically Comparable to SPECT Ventilation Image-Based Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kida, S; University of Tokyo Hospital, Bunkyo, Tokyo; Bal, M

    Purpose: An emerging lung ventilation imaging method based on 4D-CT can be used in radiotherapy to selectively avoid irradiating highly functional lung regions, which may reduce pulmonary toxicity. Efforts to validate 4D-CT ventilation imaging have focused on comparison with other imaging modalities including SPECT and xenon CT. The purpose of this study was to compare 4D-CT ventilation image-based functional IMRT plans with SPECT ventilation image-based plans as reference. Methods: 4D-CT and SPECT ventilation scans were acquired for five thoracic cancer patients in an IRB-approved prospective clinical trial. The ventilation images were created by quantitative analysis of regional volume changes (a surrogate for ventilation) using deformable image registration of the 4D-CT images. A pair of 4D-CT ventilation and SPECT ventilation image-based IMRT plans was created for each patient. Regional ventilation information was incorporated into lung dose-volume objectives for IMRT optimization by assigning different weights on a voxel-by-voxel basis. The objectives and constraints of the other structures in the plan were kept identical. The differences in the dose-volume metrics were evaluated and tested by a paired t-test. SPECT ventilation was used to calculate the lung functional dose-volume metrics (i.e., mean dose, V20 and effective dose) for both the 4D-CT ventilation image-based and SPECT ventilation image-based plans. Results: Overall there were no statistically significant differences in any dose-volume metrics between the 4D-CT and SPECT ventilation image-based plans. For example, the average functional mean lung dose of the 4D-CT plans was 26.1±9.15 (Gy), which was comparable to 25.2±8.60 (Gy) of the SPECT plans (p = 0.89). For other critical organs and the PTV, non-significant differences were found as well. Conclusion: This study has demonstrated that 4D-CT ventilation image-based functional IMRT plans are dosimetrically comparable to SPECT ventilation image-based plans, providing evidence to use 4D-CT ventilation imaging for clinical applications. Supported in part by a Free to Breathe Young Investigator Research Grant and NIH/NCI R01 CA 093626. The authors thank Philips Radiation Oncology Systems for the Pinnacle3 treatment planning systems.
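
    The statistical test named above is a paired t-test on dose-volume metrics computed for the two plan types in the same patients. A sketch with SciPy follows; the functional mean lung dose values are illustrative, not trial data.

```python
# Hedged sketch: paired t-test comparing a dose-volume metric between the
# 4D-CT- and SPECT-ventilation-based plans of the same patients.
from scipy.stats import ttest_rel

fmld_4dct  = [26.5, 14.2, 31.0, 29.8, 28.9]   # functional mean lung dose (Gy)
fmld_spect = [27.1, 13.8, 31.4, 28.7, 29.3]

t, p = ttest_rel(fmld_4dct, fmld_spect)
print(f"paired t = {t:.2f}, p = {p:.3f}")     # p > 0.05 -> no evidence of a difference
```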

  18. Novel primer specific false terminations during DNA sequencing reactions: danger of inaccuracy of mutation analysis in molecular diagnostics

    PubMed Central

    Anwar, R; Booth, A; Churchill, A J; Markham, A F

    1996-01-01

    The determination of nucleotide sequence is fundamental to the identification and molecular analysis of genes. Direct sequencing of PCR products is now becoming a commonplace procedure for haplotype analysis, and for defining mutations and polymorphism within genes, particularly for diagnostic purposes. A previously unrecognised phenomenon, primer related variability, observed in sequence data generated using Taq cycle sequencing and T7 Sequenase sequencing, is reported. This suggests that caution is necessary when interpreting DNA sequence data. This is particularly important in situations where treatment may be dependent on the accuracy of the molecular diagnosis. Images PMID:16696096

  19. Earth resources data analysis program, phase 3

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Tasks were performed in two areas: (1) systems analysis and (2) algorithmic development. The major effort in the systems analysis task was the development of a recommended approach to the monitoring of resource utilization data for the Large Area Crop Inventory Experiment (LACIE). Other efforts included participation in various studies concerning the LACIE Project Plan, the utility of the GE Image 100, and the specifications for a special purpose processor to be used in the LACIE. In the second task, the major effort was the development of improved algorithms for estimating proportions of unclassified remotely sensed data. Also, work was performed on optimal feature extraction and optimal feature extraction for proportion estimation.

  20. Detection of myocardial ischemia by automated, motion-corrected, color-encoded perfusion maps compared with visual analysis of adenosine stress cardiovascular magnetic resonance imaging at 3 T: a pilot study.

    PubMed

    Doesch, Christina; Papavassiliu, Theano; Michaely, Henrik J; Attenberger, Ulrike I; Glielmi, Christopher; Süselbeck, Tim; Fink, Christian; Borggrefe, Martin; Schoenberg, Stefan O

    2013-09-01

    The purpose of this study was to compare automated, motion-corrected, color-encoded (AMC) perfusion maps with qualitative visual analysis of adenosine stress cardiovascular magnetic resonance imaging for detection of flow-limiting stenoses. Myocardial perfusion measurements applying the standard adenosine stress imaging protocol and a saturation-recovery temporal generalized autocalibrating partially parallel acquisition (t-GRAPPA) turbo fast low angle shot (Turbo FLASH) magnetic resonance imaging sequence were performed in 25 patients using a 3.0-T MAGNETOM Skyra (Siemens Healthcare Sector, Erlangen, Germany). Perfusion studies were analyzed using AMC perfusion maps and qualitative visual analysis. Angiographically detected coronary artery (CA) stenoses greater than 75%, or of 50% or more with a myocardial perfusion reserve index less than 1.5, were considered hemodynamically relevant. Diagnostic performance and time requirement for both methods were compared. Interobserver and intraobserver reliability were also assessed. A total of 29 CA stenoses were included in the analysis. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy for detection of ischemia on a per-patient basis were comparable between the AMC perfusion maps and visual analysis. On a per-CA-territory basis, the attribution of an ischemia to the respective vessel was facilitated by the AMC perfusion maps. Interobserver and intraobserver reliability were better for the AMC perfusion maps (concordance correlation coefficient, 0.94 and 0.93, respectively) than for visual analysis (concordance correlation coefficient, 0.73 and 0.79, respectively). In addition, compared with visual analysis, the AMC perfusion maps significantly reduced analysis time, from 7.7 (3.1) to 3.2 (1.9) minutes (P < 0.0001). The AMC perfusion maps yielded a diagnostic performance on a per-patient and per-CA-territory basis comparable with visual analysis. Furthermore, this approach demonstrated higher interobserver and intraobserver reliability as well as better time efficiency when compared to visual analysis.
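
    The agreement measure reported above is the concordance correlation coefficient. A small NumPy sketch of Lin's formulation is given below with illustrative observer values; it is not the authors' analysis code.

```python
# Hedged sketch: Lin's concordance correlation coefficient between two observers.
import numpy as np

def concordance_cc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

observer_a = [1.2, 1.8, 0.9, 2.4, 1.5, 2.0]
observer_b = [1.3, 1.7, 1.0, 2.3, 1.6, 1.9]
print(f"CCC = {concordance_cc(observer_a, observer_b):.3f}")
```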

  1. Room acoustics analysis using circular arrays: an experimental study based on sound field plane-wave decomposition.

    PubMed

    Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo

    2013-04-01

    Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.

  2. High-frequency Electrocardiogram Analysis in the Ability to Predict Reversible Perfusion Defects during Adenosine Myocardial Perfusion Imaging

    NASA Technical Reports Server (NTRS)

    Tragardh, Elin; Schlegel, Todd T.; Carlsson, Marcus; Pettersson, Jonas; Nilsson, Klas; Pahlm, Olle

    2007-01-01

    Background: A previous study has shown that analysis of high-frequency QRS components (HF-QRS) is highly sensitive and reasonably specific for detecting reversible perfusion defects on myocardial perfusion imaging (MPI) scans during adenosine. The purpose of the present study was to try to reproduce those findings. Methods: 12-lead high-resolution electrocardiogram recordings were obtained from 100 patients before (baseline) and during adenosine Tc-99m-tetrofosmin MPI tests. HF-QRS were analyzed regarding morphology and changes in root mean square (RMS) voltages from before the adenosine infusion to peak infusion. Results: The best area under the curve (AUC) was found in supine patients (AUC=0.736) in a combination of morphology and RMS changes. None of the measurements, however, were statistically better than tossing a coin (AUC=0.5). Conclusion: Analysis of HF-QRS was not significantly better than tossing a coin for determining reversible perfusion defects on MPI scans.
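
    The RMS voltage measure analyzed above is the root mean square of the band-pass-filtered QRS samples in each lead. A minimal sketch follows, assuming the high-frequency QRS segment has already been extracted; the synthetic signal is illustrative only.

```python
# Hedged sketch: RMS voltage of a high-frequency QRS segment.
import numpy as np

def rms_voltage(hf_qrs_segment: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(hf_qrs_segment))))

rng = np.random.default_rng(1)
segment = 5e-6 * rng.standard_normal(120)     # ~120 samples of HF-QRS, in volts
print(f"RMS = {rms_voltage(segment) * 1e6:.1f} uV")
```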

  3. SU-E-I-33: Initial Evaluation of Model-Based Iterative CT Reconstruction Using Standard Image Quality Phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingold, E; Dave, J

    2014-06-01

    Purpose: The purpose of this study was to compare a new model-based iterative reconstruction with existing reconstruction methods (filtered backprojection and basic iterative reconstruction) using quantitative analysis of standard image quality phantom images. Methods: An ACR accreditation phantom (Gammex 464) and a CATPHAN600 phantom were scanned using 3 routine clinical acquisition protocols (adult axial brain, adult abdomen, and pediatric abdomen) on a Philips iCT system. Each scan was acquired using default conditions and 75%, 50% and 25% dose levels. Images were reconstructed using standard filtered backprojection (FBP), conventional iterative reconstruction (iDose4) and a prototype model-based iterative reconstruction (IMR). Phantom measurements included CT number accuracy, contrast to noise ratio (CNR), modulation transfer function (MTF), low contrast detectability (LCD), and noise power spectrum (NPS). Results: The choice of reconstruction method had no effect on CT number accuracy, or MTF (p<0.01). The CNR of a 6 HU contrast target was improved by 1–67% with iDose4 relative to FBP, while IMR improved CNR by 145–367% across all protocols and dose levels. Within each scan protocol, the CNR improvement from IMR vs FBP showed a general trend of greater improvement at lower dose levels. NPS magnitude was greatest for FBP and lowest for IMR. The NPS of the IMR reconstruction showed a pronounced decrease with increasing spatial frequency, consistent with the unusual noise texture seen in IMR images. Conclusion: Iterative Model Reconstruction reduces noise and improves contrast-to-noise ratio without sacrificing spatial resolution in CT phantom images. This offers the possibility of radiation dose reduction and improved low contrast detectability compared with filtered backprojection or conventional iterative reconstruction.
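
    Among the phantom metrics above, the contrast-to-noise ratio is the simplest to restate in code: the ROI mean difference divided by the background noise. The sketch below uses synthetic ROI values and is not tied to the scanner or reconstruction software named in the study.

```python
# Hedged sketch: contrast-to-noise ratio (CNR) from two regions of interest.
import numpy as np

def cnr(target_roi: np.ndarray, background_roi: np.ndarray) -> float:
    return abs(target_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(2)
target = rng.normal(46.0, 5.0, size=(32, 32))      # e.g. a 6 HU contrast insert
background = rng.normal(40.0, 5.0, size=(32, 32))  # water-equivalent region
print(f"CNR = {cnr(target, background):.2f}")
```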

  4. Diagnostic accuracy at several reduced radiation dose levels for CT imaging in the diagnosis of appendicitis

    NASA Astrophysics Data System (ADS)

    Zhang, Di; Khatonabadi, Maryam; Kim, Hyun; Jude, Matilda; Zaragoza, Edward; Lee, Margaret; Patel, Maitraya; Poon, Cheryce; Douek, Michael; Andrews-Tang, Denise; Doepke, Laura; McNitt-Gray, Shawn; Cagnon, Chris; DeMarco, John; McNitt-Gray, Michael

    2012-03-01

    Purpose: While several studies have investigated the tradeoffs between radiation dose and image quality (noise) in CT imaging, the purpose of this study was to take this analysis a step further by investigating the tradeoffs between patient radiation dose (including organ dose) and diagnostic accuracy in the diagnosis of appendicitis using CT. Methods: This study was IRB approved and utilized data from 20 patients who underwent clinical CT exams for indications of appendicitis. Medical record review established the true diagnosis of appendicitis, with 10 positives and 10 negatives. A validated software tool used raw projection data from each scan to create simulated images at lower dose levels (70%, 50%, 30%, 20% of original). An observer study was performed with 6 radiologists reviewing each case at each dose level in random order over several sessions. Readers assessed image quality and provided confidence in their diagnosis of appendicitis, each on a 5-point scale. Liver doses for each case and each dose level were estimated using Monte Carlo simulation based methods. Results: Overall diagnostic accuracy was 92%, 93%, 91%, 90% and 90% across the 100%, 70%, 50%, 30% and 20% dose levels, respectively, and 93%, 95%, 88%, 90% and 90% across the 13.5-22 mGy, 9.6-13.5 mGy, 6.4-9.6 mGy, 4-6.4 mGy, and 2-4 mGy liver dose ranges, respectively. Only 4 out of 600 observations were rated "unacceptable" for image quality. Conclusion: The results from this pilot study indicate that diagnostic accuracy does not change dramatically even at significantly reduced radiation dose.

  5. Population based MRI and DTI templates of the adult ferret brain and tools for voxelwise analysis.

    PubMed

    Hutchinson, E B; Schwerin, S C; Radomski, K L; Sadeghi, N; Jenkins, J; Komlosh, M E; Irfanoglu, M O; Juliano, S L; Pierpaoli, C

    2017-05-15

    Non-invasive imaging has the potential to play a crucial role in the characterization and translation of experimental animal models to investigate human brain development and disorders, especially when employed to study animal models that more accurately represent features of human neuroanatomy. The purpose of this study was to build and make available MRI and DTI templates and analysis tools for the ferret brain as the ferret is a well-suited species for pre-clinical MRI studies with folded cortical surface, relatively high white matter volume and body dimensions that allow imaging with pre-clinical MRI scanners. Four ferret brain templates were built in this study - in-vivo MRI and DTI and ex-vivo MRI and DTI - using brain images across many ferrets and region of interest (ROI) masks corresponding to established ferret neuroanatomy were generated by semi-automatic and manual segmentation. The templates and ROI masks were used to create a web-based ferret brain viewing software for browsing the MRI and DTI volumes with annotations based on the ROI masks. A second objective of this study was to provide a careful description of the imaging methods used for acquisition, processing, registration and template building and to demonstrate several voxelwise analysis methods including Jacobian analysis of morphometry differences between the female and male brain and bias-free identification of DTI abnormalities in an injured ferret brain. The templates, tools and methodological optimization presented in this study are intended to advance non-invasive imaging approaches for human-similar animal species that will enable the use of pre-clinical MRI studies for understanding and treating brain disorders. Published by Elsevier Inc.

  6. Velopharyngeal Anatomy in 22q11.2 Deletion Syndrome: A Three-Dimensional Cephalometric Analysis

    PubMed Central

    Ruotolo, Rachel A.; Veitia, Nestor A.; Corbin, Aaron; McDonough, Joseph; Solot, Cynthia B.; McDonald-McGinn, Donna; Zackai, Elaine H.; Emanuel, Beverly S.; Cnaan, Avital; LaRossa, Don; Arens, Raanan; Kirschner, Richard E.

    2010-01-01

    Objective: 22q11.2 deletion syndrome is the most common genetic cause of velopharyngeal dysfunction (VPD). Magnetic resonance imaging (MRI) is a promising method for noninvasive, three-dimensional (3D) assessment of velopharyngeal (VP) anatomy. The purpose of this study was to assess VP structure in patients with 22q11.2 deletion syndrome by using 3D MRI analysis. Design: This was a retrospective analysis of magnetic resonance images obtained in patients with VPD associated with a 22q11.2 deletion compared with a normal control group. Setting: This study was conducted at The Children’s Hospital of Philadelphia, a pediatric tertiary care center. Patients, Participants: The study group consisted of 5 children between the ages of 2.9 and 7.9 years, with 22q11.2 deletion syndrome confirmed by fluorescence in situ hybridization analysis. All had VPD confirmed by nasendoscopy or videofluoroscopy. The control population consisted of 123 unaffected patients who underwent MRI for reasons other than VP assessment. Interventions: Axial and sagittal T1- and T2-weighted magnetic resonance images with 3-mm slice thickness were obtained from the orbit to the larynx in all patients by using a 1.5T Siemens Visions system. Outcome Measures: Linear, angular, and volumetric measurements of VP structures were obtained from the magnetic resonance images with VIDA image-processing software. Results: The study group demonstrated greater anterior and posterior cranial base and atlanto-dental angles. They also demonstrated greater pharyngeal cavity volume and width and lesser tonsillar and adenoid volumes. Conclusion: Patients with a 22q11.2 deletion demonstrate significant alterations in VP anatomy that may contribute to VPD. PMID:16854203

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research — the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of the code and features for a wide audience. Open-source software developed in other programming languages is integrated to enhance various components of this toolbox, for example, the Java-based DCM4CHE for import of DICOM and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source, GNU-licensed software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. That analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
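
    One of the feature families listed above, gray-level co-occurrence texture, can be illustrated with scikit-image as below. This is a generic sketch of GLCM feature extraction on a quantized ROI, not the Matlab-based Radiomics toolbox code itself.

```python
# Hedged sketch: gray-level co-occurrence matrix (GLCM) texture features.
# Note: graycomatrix/graycoprops are spelled greycomatrix/greycoprops in
# older scikit-image releases.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(3)
roi = rng.integers(0, 64, size=(48, 48), dtype=np.uint8)   # ROI quantized to 64 gray levels

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop).mean()))
```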

  8. SU-E-J-225: CEST Imaging in Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Hwang, K; Fuller, C

    Purpose: Chemical Exchange Saturation Transfer (CEST) imaging is an MRI technique that enables the detection and imaging of metabolically active compounds in vivo. It has been used to differentiate tumor types and metabolic characteristics. Unlike PET/CT, CEST imaging does not use isotopes, so it can be used on patients repeatedly. This study reports the preliminary results of CEST imaging in head and neck cancer (HNC) patients. Methods: A CEST imaging sequence and the post-processing software were developed on a 3T clinical MRI scanner. Ten patients with human papillomavirus-positive oropharyngeal cancer were imaged in their immobilized treatment position. A 5 mm slice CEST image was acquired (128×128, FOV=20∼24cm) to encompass the maximum dimension of the tumor. Twenty-nine offset frequencies (from −7.8 ppm to +7.8 ppm) were acquired to obtain the Z-spectrum. Asymmetry analysis was used to extract the CEST contrasts. ROIs at the tumor, nodes and surrounding tissues were measured. Results: CEST images were successfully acquired and Z-spectrum asymmetry analysis demonstrated clear CEST contrasts in the tumor as well as the surrounding tissues. A 3∼5% CEST contrast in the range of 1 to 4 ppm was noted in tumors as well as grossly involved nodes. Injection of glucose produced a marked increase of CEST contrast in the tumor region (∼10%). Motion and pulsation artifacts tend to smear the CEST contrast, making the interpretation of the image contrast difficult. Field nonuniformity, pulsation in blood vessels and susceptibility artifacts caused by air cavities were also problematic for CEST imaging. Conclusion: We have demonstrated successful CEST acquisition and Z-spectrum reconstruction on HNC patients with a clinical scanner. MRI acquisition in the immobilized treatment position is critical for image quality as well as the success of CEST image acquisition. CEST images provide novel contrast of metabolites in HNC and present great potential in the pre- and post-treatment assessment of patients undergoing radiation therapy.
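
    The asymmetry analysis mentioned above compares the Z-spectrum at mirrored offsets around water: MTR_asym(Δω) = (S(−Δω) − S(+Δω)) / S0 for each positive offset. The sketch below applies this calculation to a toy Z-spectrum sampled at 29 offsets; it illustrates the formula and is not the authors' reconstruction software.

```python
# Hedged sketch: Z-spectrum asymmetry analysis for CEST contrast.
import numpy as np

def mtr_asymmetry(offsets_ppm, z_spectrum, s0):
    """Return positive offsets and MTR_asym = (S(-dw) - S(+dw)) / S0."""
    offsets_ppm = np.asarray(offsets_ppm, float)
    pos = offsets_ppm[offsets_ppm > 0]
    s_pos = np.interp(pos, offsets_ppm, z_spectrum)
    s_neg = np.interp(-pos, offsets_ppm, z_spectrum)
    return pos, (s_neg - s_pos) / s0

offsets = np.linspace(-7.8, 7.8, 29)                                    # 29 offsets, as in the study
z = 1 - 0.5 * np.exp(-offsets**2) - 0.04 * np.exp(-(offsets - 3.5)**2)  # toy Z-spectrum
dw, asym = mtr_asymmetry(offsets, z, s0=1.0)
print(asym.round(3))                                                    # positive near ~3.5 ppm
```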

  9. Real-time deblurring of handshake blurred images on smartphones

    NASA Astrophysics Data System (ADS)

    Pourreza-Shahri, Reza; Chang, Chih-Hsiang; Kehtarnavaz, Nasser

    2015-02-01

    This paper discusses an Android app for removing blur that is introduced as a result of handshakes when taking images via a smartphone. The algorithm utilizes two images to achieve deblurring in a computationally efficient manner without suffering from the artifacts associated with deconvolution deblurring algorithms. The first image is the normal or auto-exposure image and the second image is a short-exposure image that is automatically captured immediately before or after the auto-exposure image is taken. A low rank approximation image is obtained by applying singular value decomposition to the auto-exposure image, which may appear blurred due to handshakes. This approximation image does not suffer from blurring while incorporating the image brightness and contrast information. The eigenvalues extracted from the low rank approximation image are then combined with those from the short-exposure image. It is shown that this deblurring app is computationally more efficient than the adaptive tonal correction algorithm which was previously developed for the same purpose.
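
    The low-rank step described above is a truncated singular value decomposition of the auto-exposure image. A minimal NumPy sketch is given below; the subsequent combination with the short-exposure image, which the paper describes, is not shown, and the rank and test image are illustrative.

```python
# Hedged sketch: rank-k approximation of the (possibly blurred) auto-exposure image.
import numpy as np

def low_rank_approximation(image: np.ndarray, k: int) -> np.ndarray:
    u, s, vt = np.linalg.svd(image.astype(float), full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

rng = np.random.default_rng(4)
auto_exposure = rng.normal(128, 30, size=(256, 256))   # stand-in for the blurred frame
approx = low_rank_approximation(auto_exposure, k=20)
rel_err = np.linalg.norm(auto_exposure - approx) / np.linalg.norm(auto_exposure)
print(approx.shape, f"relative error = {rel_err:.3f}")
```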

  10. Representation of scientific methodology in secondary science textbooks

    NASA Astrophysics Data System (ADS)

    Binns, Ian C.

    The purpose of this investigation was to assess the representation of scientific methodology in secondary science textbooks. More specifically, this study looked at how textbooks introduced scientific methodology and to what degree the examples from the rest of the textbook, the investigations, and the images were consistent with the text's description of scientific methodology, if at all. The sample included eight secondary science textbooks from two publishers, McGraw-Hill/Glencoe and Harcourt/Holt, Rinehart & Winston. Data consisted of all student text and teacher text that referred to scientific methodology. Second, all investigations in the textbooks were analyzed. Finally, any images that depicted scientists working were also collected and analyzed. The text analysis and activity analysis used the ethnographic content analysis approach developed by Altheide (1996). The rubrics used for the text analysis and activity analysis were initially guided by the Benchmarks (AAAS, 1993), the NSES (NRC, 1996), and the nature of science literature. Preliminary analyses helped to refine each of the rubrics and grounded them in the data. Image analysis used stereotypes identified in the DAST literature. Findings indicated that all eight textbooks presented mixed views of scientific methodology in their initial descriptions. Five textbooks placed more emphasis on the traditional view and three placed more emphasis on the broad view. Results also revealed that the initial descriptions, examples, investigations, and images all emphasized the broad view for Glencoe Biology and the traditional view for Chemistry: Matter and Change. The initial descriptions, examples, investigations, and images in the other six textbooks were not consistent. Overall, the textbook with the most appropriate depiction of scientific methodology was Glencoe Biology and the textbook with the least appropriate depiction of scientific methodology was Physics: Principles and Problems. These findings suggest that compared to earlier investigations, textbooks have begun to improve in how they represent scientific methodology. However, there is still much room for improvement. Future research needs to consider how textbooks impact teachers' and students' understandings of scientific methodology.

  11. Analytical validation of quantitative immunohistochemical assays of tumor infiltrating lymphocyte biomarkers.

    PubMed

    Singh, U; Cui, Y; Dimaano, N; Mehta, S; Pruitt, S K; Yearley, J; Laterza, O F; Juco, J W; Dogdas, B

    2018-06-04

    Tumor infiltrating lymphocytes (TIL), especially T-cells, have both prognostic and therapeutic applications. The presence of CD8+ effector T-cells and the ratio of CD8+ cells to FOXP3+ regulatory T-cells have been used as biomarkers of disease prognosis to predict response to various immunotherapies. Blocking the interaction between inhibitory receptors on T-cells and their ligands with therapeutic antibodies including atezolizumab, nivolumab, pembrolizumab and tremelimumab increases the immune response against cancer cells and has shown significant improvement in clinical benefits and survival in several different tumor types. The improved clinical outcome is presumed to be associated with a higher tumor infiltration; therefore, it is thought that more accurate methods for measuring the amount of TIL could assist prognosis and predict treatment response. We have developed and validated quantitative immunohistochemistry (IHC) assays for CD3, CD8 and FOXP3 for immunophenotyping T-lymphocytes in tumor tissue. Various types of formalin-fixed, paraffin-embedded (FFPE) tumor tissues were immunolabeled with anti-CD3, anti-CD8 and anti-FOXP3 antibodies using an IHC autostainer. The tumor area of stained tissues, including the invasive margin of the tumor, was scored by a pathologist (visual scoring) and by computer-based quantitative image analysis. Two image analysis scores were obtained for the staining of each biomarker: the percent positive cells in the tumor area and positive cells/mm² tumor area. Comparison of visual vs. image analysis scoring methods using regression analysis showed high correlation and indicated that quantitative image analysis can be used to score the number of positive cells in IHC stained slides. To demonstrate that the IHC assays produce consistent results in normal daily testing, we evaluated the specificity, sensitivity and reproducibility of the IHC assays using both visual and image analysis scoring methods. We found that CD3, CD8 and FOXP3 IHC assays met the fit-for-purpose analytical acceptance validation criteria and that they can be used to support clinical studies.

  12. Towards the use of computationally inserted lesions for mammographic CAD assessment

    NASA Astrophysics Data System (ADS)

    Ghanian, Zahra; Pezeshk, Aria; Petrick, Nicholas; Sahiner, Berkman

    2018-03-01

    Computer-aided detection (CADe) devices used for breast cancer detection on mammograms are typically first developed and assessed for a specific "original" acquisition system, e.g., a specific image detector. When CADe developers are ready to apply their CADe device to a new mammographic acquisition system, they typically assess the CADe device with images acquired using the new system. Collecting large repositories of clinical images containing verified cancer locations and acquired by the new image acquisition system is costly and time consuming. Our goal is to develop a methodology to reduce the clinical data burden in the assessment of a CADe device for use with a different image acquisition system. We are developing an image blending technique that allows users to seamlessly insert lesions imaged using an original acquisition system into normal images or regions acquired with a new system. In this study, we investigated the insertion of microcalcification clusters imaged using an original acquisition system into normal images acquired with that same system utilizing our previously-developed image blending technique. We first performed a reader study to assess whether experienced observers could distinguish between computationally inserted and native clusters. For this purpose, we applied our insertion technique to clinical cases taken from the University of South Florida Digital Database for Screening Mammography (DDSM) and the Breast Cancer Digital Repository (BCDR). Regions of interest containing microcalcification clusters from one breast of a patient were inserted into the contralateral breast of the same patient. The reader study included 55 native clusters and their 55 inserted counterparts. Analysis of the reader ratings using receiver operating characteristic (ROC) methodology indicated that inserted clusters cannot be reliably distinguished from native clusters (area under the ROC curve, AUC=0.58±0.04). Furthermore, CADe sensitivity was evaluated on mammograms with native and inserted microcalcification clusters using a commercial CADe system. For this purpose, we used full field digital mammograms (FFDMs) from 68 clinical cases, acquired at the University of Michigan Health System. The average sensitivities for native and inserted clusters were equal, 85.3% (58/68). These results demonstrate the feasibility of using the inserted microcalcification clusters for assessing mammographic CAD devices.
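
    The reader-study analysis above applies ROC methodology to ordinal confidence ratings. A sketch of that step with scikit-learn follows; the labels and ratings are illustrative, not the study data.

```python
# Hedged sketch: ROC AUC from reader confidence ratings (native vs. inserted clusters).
import numpy as np
from sklearn.metrics import roc_auc_score

truth   = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # 1 = native, 0 = inserted
ratings = np.array([4, 3, 2, 3, 2, 4, 3, 3, 2, 1])   # reader confidence "looks native"

print(f"AUC = {roc_auc_score(truth, ratings):.2f}")  # near 0.5 -> indistinguishable
```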

  13. TU-EF-204-02: Hiigh Quality and Sub-MSv Cerebral CT Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ke; Niu, Kai; Wu, Yijing

    2015-06-15

    Purpose: CT Perfusion (CTP) imaging is of great importance in acute ischemic stroke management due to its potential to detect hypoperfused yet salvageable tissue and distinguish it from definitely unsalvageable tissue. However, current CTP imaging suffers from poor image quality and high radiation dose (up to 5 mSv). The purpose of this work was to demonstrate that technical innovations such as Prior Image Constrained Compressed Sensing (PICCS) have the potential to address these challenges and achieve high quality and sub-mSv CTP imaging. Methods: (1) A spatial-temporal 4D cascaded system model was developed to identify the bottlenecks in the current CTP technology; (2) a task-based framework was developed to optimize the CTP system parameters; (3) guided by (1) and (2), PICCS was customized for the reconstruction of CTP source images. Digital anthropomorphic perfusion phantoms, animal studies, and preliminary human subject studies were used to validate and evaluate the potential of using these innovations to advance the CTP technology. Results: The 4D cascaded model was validated in both phantom and canine stroke models. Based upon this cascaded model, it has been discovered that, as long as the spatial resolution and noise properties of the 4D source CT images are given, the 3D MTF and NPS of the final CTP maps can be analytically derived for a given set of processing methods and parameters. The cascaded model analysis also identified that the most critical technical factor in CTP is how to acquire and reconstruct high quality source images; it has very little to do with the denoising techniques often used after parametric perfusion calculations. This explained why PICCS resulted in a five-fold dose reduction or substantial improvement in image quality. Conclusion: Technical innovations generated promising results towards achieving high quality and sub-mSv CTP imaging for reliable and safe assessment of acute ischemic strokes. K. Li, K. Niu, Y. Wu: Nothing to disclose. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.

  14. A comparative analysis of 7.0-Tesla magnetic resonance imaging and histology measurements of knee articular cartilage in a canine posterolateral knee injury model: a preliminary analysis.

    PubMed

    Pepin, Scott R; Griffith, Chad J; Wijdicks, Coen A; Goerke, Ute; McNulty, Margaret A; Parker, Josh B; Carlson, Cathy S; Ellermann, Jutta; LaPrade, Robert F

    2009-11-01

    There has recently been increased interest in the use of 7.0-T magnetic resonance imaging for evaluating articular cartilage degeneration and quantifying the progression of osteoarthritis. The purpose of this study was to evaluate articular cartilage cross-sectional area and maximum thickness in the medial compartment of intact and destabilized canine knees using 7.0-T magnetic resonance images and compare these results with those obtained from the corresponding histologic sections. Controlled laboratory study. Five canines had a surgically created unilateral grade III posterolateral knee injury that was followed for 6 months before euthanasia. The opposite, noninjured knee was used as a control. At necropsy, 3-dimensional gradient echo images of the medial tibial plateau of both knees were obtained using a 7.0-T magnetic resonance imaging scanner. Articular cartilage area and maximum thickness in this site were digitally measured on the magnetic resonance images. The proximal tibias were processed for routine histologic analysis with hematoxylin and eosin staining. Articular cartilage area and maximum thickness were measured in histologic sections corresponding to the sites of the magnetic resonance slices. The magnetic resonance imaging results revealed an increase in articular cartilage area and maximum thickness in surgical knees compared with control knees in all specimens; these changes were significant for both parameters (P <.05 for area; P <.01 for thickness). The average increase in area was 14.8% and the average increase in maximum thickness was 15.1%. The histologic results revealed an average increase in area of 27.4% (P = .05) and an average increase in maximum thickness of 33.0% (P = .06). Correlation analysis between the magnetic resonance imaging and histology data revealed that the area values were significantly correlated (P < .01), but the values for thickness obtained from magnetic resonance imaging were not significantly different from the histology sections (P > .1). These results demonstrate that 7.0-T magnetic resonance imaging provides an alternative method to histology to evaluate early osteoarthritic changes in articular cartilage in a canine model by detecting increases in articular cartilage area. The noninvasive nature of 7.0-T magnetic resonance imaging will allow for in vivo monitoring of osteoarthritis progression and intervention in animal models and humans for osteoarthritis.

  15. Macula segmentation and fovea localization employing image processing and heuristic based clustering for automated retinal screening.

    PubMed

    R, GeethaRamani; Balasubramanian, Lakshmi

    2018-07-01

    Macula segmentation and fovea localization are among the primary tasks in retinal analysis, as these structures are responsible for detailed vision. Existing approaches required segmentation of other retinal structures, namely the optic disc and blood vessels, for this purpose. This work avoids knowledge of other retinal structures and applies data mining techniques to segment the macula. An unsupervised clustering algorithm is exploited for this purpose. Selection of initial cluster centres has a great impact on the performance of clustering algorithms. A heuristic-based clustering, in which initial centres are selected based on measures describing the statistical distribution of the data, is incorporated in the proposed methodology. The initial phase of the proposed framework includes image cropping, green channel extraction, contrast enhancement and application of mathematical closing. Then, the pre-processed image is subjected to heuristic-based clustering, yielding a binary map. The binary image is post-processed to eliminate unwanted components. Finally, the component with the minimum intensity is finalized as the macula, and its centre constitutes the fovea. The proposed approach outperforms existing works by reporting that 100% of HRF, 100% of DRIVE, 96.92% of DIARETDB0, 97.75% of DIARETDB1, 98.81% of HEI-MED, 90% of STARE and 99.33% of MESSIDOR images satisfy the 1R criterion, a standard adopted for evaluating the performance of macula and fovea identification. The proposed system thus helps ophthalmologists in identifying the macula, thereby facilitating the identification of any abnormality present within the macula region. Copyright © 2018 Elsevier B.V. All rights reserved.
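
    The pipeline above (green channel, contrast enhancement, closing, heuristic-seeded two-cluster k-means, darkest component as macula) is sketched below with OpenCV and scikit-learn. The file name, kernel size, and percentile-based seeding are assumptions for illustration, not the authors' exact parameters.

```python
# Hedged sketch of the macula segmentation / fovea localization pipeline.
import cv2
import numpy as np
from sklearn.cluster import KMeans

rgb = cv2.cvtColor(cv2.imread("fundus.png"), cv2.COLOR_BGR2RGB)  # assumed input file
green = rgb[:, :, 1]                                             # green channel

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))      # contrast enhancement
closed = cv2.morphologyEx(clahe.apply(green), cv2.MORPH_CLOSE,
                          np.ones((15, 15), np.uint8))           # mathematical closing

# Heuristic-seeded clustering: initial centres from low/high intensity percentiles.
pixels = closed.reshape(-1, 1).astype(float)
init = np.percentile(pixels, [10, 90]).reshape(2, 1)
labels = KMeans(n_clusters=2, init=init, n_init=1).fit_predict(pixels)
dark = int(np.argmin([pixels[labels == k].mean() for k in (0, 1)]))
binary = (labels == dark).reshape(closed.shape).astype(np.uint8)

# Keep the connected component with the lowest mean intensity as the macula;
# its centroid serves as the fovea estimate.
n, cc, stats, centroids = cv2.connectedComponentsWithStats(binary)
macula_id = 1 + int(np.argmin([closed[cc == i].mean() for i in range(1, n)]))
print("estimated fovea (x, y):", centroids[macula_id])
```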

  16. Single-shot spiral imaging at 7 T.

    PubMed

    Engel, Maria; Kasper, Lars; Barmet, Christoph; Schmid, Thomas; Vionnet, Laetitia; Wilm, Bertram; Pruessmann, Klaas P

    2018-03-25

    The purpose of this work is to explore the feasibility and performance of single-shot spiral MRI at 7 T, using an expanded signal model for reconstruction. Gradient-echo brain imaging is performed on a 7 T system using high-resolution single-shot spiral readouts and half-shot spirals that perform dual-image acquisition after a single excitation. Image reconstruction is based on an expanded signal model including the encoding effects of coil sensitivity, static off-resonance, and magnetic field dynamics. The latter are recorded concurrently with image acquisition, using NMR field probes. The resulting image resolution is assessed by point spread function analysis. Single-shot spiral imaging is achieved at a nominal resolution of 0.8 mm, using spiral-out readouts of 53-ms duration. High depiction fidelity is achieved without conspicuous blurring or distortion. Effective resolutions are assessed as 0.8, 0.94, and 0.98 mm in CSF, gray matter and white matter, respectively. High image quality is also achieved with half-shot acquisition yielding image pairs at 1.5-mm resolution. Use of an expanded signal model enables single-shot spiral imaging at 7 T with unprecedented image quality. Single-shot and half-shot spiral readouts deploy the sensitivity benefit of high field for rapid high-resolution imaging, particularly for functional MRI and arterial spin labeling. © 2018 International Society for Magnetic Resonance in Medicine.

  17. SHORT COMMUNICATION: An image processing approach to calibration of hydrometers

    NASA Astrophysics Data System (ADS)

    Lorefice, S.; Malengo, A.

    2004-06-01

    The usual method adopted for multipoint calibration of glass hydrometers is based on the measurement of the buoyancy by hydrostatic weighing when the hydrometer is plunged in a reference liquid up to the scale mark to be calibrated. An image processing approach is proposed by the authors to align the relevant scale mark with the reference liquid surface level. The method uses image analysis with a data processing technique and takes into account the perspective error. For this purpose a CCD camera with a pixel matrix of 604H × 576V and a lens of 16 mm focal length were used. High accuracy in the hydrometer reading was obtained as the resulting reading uncertainty was lower than 0.02 mm, about a fifth of the usual figure with the visual reading made by an operator.

  18. Remote assessment of acne: the use of acne grading tools to evaluate digital skin images.

    PubMed

    Bergman, Hagit; Tsai, Kenneth Y; Seo, Su-Jean; Kvedar, Joseph C; Watson, Alice J

    2009-06-01

    Digital imaging of dermatology patients is a novel approach to remote data collection. A number of assessment tools have been developed to grade acne severity and to track clinical progress over time. Although these tools have been validated when used in a face-to-face setting, their efficacy and reliability when used to assess digital images have not been examined. The main purpose of this study was to determine whether specific assessment tools designed to grade acne during face-to-face visits can be applied to the evaluation of digital images. The secondary purpose was to ascertain whether images obtained by subjects are of adequate quality to allow such assessments to be made. Three hundred (300) digital images of patients with mild to moderate facial inflammatory acne from an ongoing randomized-controlled study were included in this analysis. These images were obtained from 20 patients and consisted of sets of 3 images taken over time. Of these images, 120 images were captured by subjects themselves and 180 were taken by study staff. Subjects were asked to retake their photographs if the initial images were deemed of poor quality by study staff. Images were evaluated by two dermatologists-in-training using validated acne assessment measures: Total Inflammatory Lesion Count, Leeds technique, and the Investigator's Global Assessment. Reliability of raters was evaluated using correlation coefficients and kappa statistics. Of the different acne assessment measures tested, the inter-rater reliability was highest for the total inflammatory lesion count (r = 0.871), but low for the Leeds technique (kappa = 0.381) and global assessment (kappa = 0.3119). Raters were able to evaluate over 89% of all images using each type of acne assessment measure despite the fact that images obtained by study staff were of higher quality than those obtained by patients (p < 0.001). Several existing clinical assessment measures can be used to evaluate digital images obtained from subjects with inflammatory acne lesions. The level of inter-rater agreement is highly variable across assessment measures, and we found the Total Inflammatory Lesion Count to be the most reliable. This measure could be used to allow a dermatologist to remotely track a patient's progress over time.
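
    The two reliability statistics reported above, a correlation coefficient for lesion counts and kappa for categorical grades, can be computed as in the sketch below; the rater data are illustrative, not the study's.

```python
# Hedged sketch: inter-rater reliability via Pearson r and Cohen's kappa.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

counts_rater1 = [12, 8, 15, 20, 5, 9, 11, 14]
counts_rater2 = [11, 9, 16, 19, 6, 8, 12, 13]
grades_rater1 = ["mild", "moderate", "mild", "severe", "mild", "moderate", "mild", "moderate"]
grades_rater2 = ["mild", "mild", "mild", "severe", "moderate", "moderate", "mild", "moderate"]

r, _ = pearsonr(counts_rater1, counts_rater2)
kappa = cohen_kappa_score(grades_rater1, grades_rater2)
print(f"lesion counts: r = {r:.3f}; grades: kappa = {kappa:.3f}")
```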

  19. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprint, volume scientific data (using volume rendering, isosurface and slice plane), raster data such as still satellite images, vector data and 3D models such as buildings or vehicles. The application runs on touch screen devices such as tablets. The software is based on open source libraries, such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire which provides multisource geo-referenced video fluxes. When all these components will be included, EarthScape will be a multi-purpose platform that will provide at the same time data analysis, hybrid visualization and complex interactions. The software is available on demand for free at france@exelisvis.com.

  20. Object-based image analysis for cadastral mapping using satellite images

    NASA Astrophysics Data System (ADS)

    Kohli, D.; Crommelinck, S.; Bennett, R.; Koeva, M.; Lemmen, C.

    2017-10-01

    Cadasters together with land registry form a core ingredient of any land administration system. Cadastral maps record the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor, cost and time intensive: alternative approaches are thus being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for (semi-)automation of cadastral boundary detection. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.

  1. Exploratory analysis of diffusion tensor imaging in children with attention deficit hyperactivity disorder: evidence of abnormal white matter structure.

    PubMed

    Pastura, Giuseppe; Doering, Thomas; Gasparetto, Emerson Leandro; Mattos, Paulo; Araújo, Alexandra Prüfer

    2016-06-01

    Abnormalities in the white matter microstructure of the attentional system have been implicated in the aetiology of attention deficit hyperactivity disorder (ADHD). Diffusion tensor imaging (DTI) is a promising magnetic resonance imaging (MRI) technology that has increasingly been used in studies of white matter microstructure in the brain. The main objective of this work was to perform an exploratory analysis of white matter tracts in a sample of children with ADHD versus typically developing children (TDC). For this purpose, 13 drug-naive children with ADHD of both genders underwent MRI using DTI acquisition methodology and tract-based spatial statistics. The results were compared to those of a sample of 14 age- and gender-matched TDC. Lower fractional anisotropy was observed in the splenium of the corpus callosum, right superior longitudinal fasciculus, bilateral retrolenticular part of the internal capsule, bilateral inferior fronto-occipital fasciculus, left external capsule and posterior thalamic radiation (including right optic radiation). We conclude that white matter tracts in attentional and motor control systems exhibited signs of abnormal microstructure in this sample of drug-naive children with ADHD.

  2. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

    Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, a number of image contrast enhancement algorithms based on fuzzy logic were applied to liver ultrasound images - in which the view of the kidney is observable - using Matlab2013b to improve image contrast and quality, which have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. With the measurement of Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to the improvement of contrast and visual quality of the images and to the improvement of the results of liver segmentation algorithms on the images. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications.
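
    Of the fuzzy methods compared above, the intensification (INT) operator is the simplest to sketch: map intensities to [0, 1] memberships and sharpen them with the standard piecewise rule. The code below is a generic illustration with a synthetic stand-in image and a basic PSNR helper, not the study's Matlab implementation.

```python
# Hedged sketch: fuzzy intensification (INT) operator for contrast enhancement.
import numpy as np

def fuzzy_intensification(image: np.ndarray, iterations: int = 1) -> np.ndarray:
    mu = (image.astype(float) - image.min()) / (image.max() - image.min() + 1e-12)
    for _ in range(iterations):
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)  # INT operator
    return (mu * 255).astype(np.uint8)

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(255.0**2 / mse)

rng = np.random.default_rng(5)
ultrasound = rng.integers(40, 180, size=(128, 128), dtype=np.uint8)  # stand-in image
enhanced = fuzzy_intensification(ultrasound, iterations=2)
print(f"PSNR of enhanced vs. original = {psnr(ultrasound, enhanced):.1f} dB")
```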

  3. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are widely used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, a number of image contrast enhancement algorithms based on fuzzy logic were applied to liver ultrasound images - in which the view of the kidney is observable - using Matlab2013b to improve image contrast and quality, which have an inherently fuzzy definition: contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: With the measurement of Mean Squared Error and Peak Signal to Noise Ratio obtained from different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to the improvement of contrast and visual quality of the images and to the improvement of the results of liver segmentation algorithms on the images. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications. PMID:28077898

  4. Computerized quantitative evaluation of mammographic accreditation phantom images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Yongbum; Tsai, Du-Yih; Shinohara, Norimitsu

    2010-12-15

    Purpose: The objective was to develop and investigate an automated scoring scheme for American College of Radiology (ACR) mammographic accreditation phantom (RMI 156, Middleton, WI) images. Methods: The developed method consisted of background subtraction, determination of a region of interest, classification of fiber and mass objects by Mahalanobis distance, detection of specks by template matching, and rule-based scoring. Fifty-one phantom images were collected from 51 facilities for this study (one facility provided one image). A medical physicist and two radiologic technologists also scored the images, and the human and computerized scores were compared. Results: In terms of meeting the ACR's criteria, the accuracies of the developed method for computerized evaluation of fiber, mass, and speck objects were 90%, 80%, and 98%, respectively. Contingency table analysis revealed a significant association between observer and computer scores for microcalcifications (p<5%) but not for masses and fibers. Conclusions: The developed method may provide a stable assessment of test-object visibility in mammographic accreditation phantom images with respect to whether an image meets the ACR's criteria, although there is room for improvement in the approach for fiber and mass objects.
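
    Two ingredients named in this record, Mahalanobis-distance classification of fiber and mass candidates and template matching for specks, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors, class statistics, speck template, and score threshold are all left to the caller, and scikit-image's match_template is used as a stand-in normalized cross-correlation.

    ```python
    import numpy as np
    from skimage.feature import match_template

    def mahalanobis_distance(features, class_mean, class_cov):
        """Distance of a candidate object's feature vector from a trained class
        (used to decide whether a detected blob behaves like a fiber or a mass)."""
        diff = np.asarray(features, float) - np.asarray(class_mean, float)
        return float(np.sqrt(diff @ np.linalg.inv(class_cov) @ diff))

    def speck_score_map(roi, speck_template):
        """Normalized cross-correlation of a small speck template over the ROI.

        Peaks above a chosen threshold would be counted as detected specks;
        the template and threshold are not given in the abstract, so both are
        left to the caller.
        """
        return match_template(roi.astype(float), speck_template.astype(float),
                              pad_input=True)
    ```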

  5. Image analysis of pulmonary nodules using micro CT

    NASA Astrophysics Data System (ADS)

    Niki, Noboru; Kawata, Yoshiki; Fujii, Masashi; Kakinuma, Ryutaro; Moriyama, Noriyuki; Tateno, Yukio; Matsui, Eisuke

    2001-07-01

    We are developing a micro-computed tomography (micro CT) system for imaging pulmonary nodules. The purpose is to enhance physician performance in assessing the micro-architecture of a nodule for classification between malignant and benign nodules. The basic components of the micro CT system are a microfocus X-ray source, a specimen manipulator, and an image intensifier detector coupled to a charge-coupled device (CCD) camera. 3D image reconstruction was performed slice by slice. A standard fan-beam convolution and backprojection algorithm was used to reconstruct the center plane intersecting the X-ray source. The preprocessing for the 3D image reconstruction included correction of the geometrical distortions and the shading artifact introduced by the image intensifier. The main advantage of the system is its high spatial resolution, which ranges between b micrometers and 25 micrometers. In this work we report on preliminary studies performed with the micro CT for imaging resected tissues of normal and abnormal lung. Experimental results reveal the micro-architecture of lung tissues, such as the alveolar wall, the septal wall of the pulmonary lobule, and the bronchiole. From these results, the micro CT system is expected to have interesting potential for highly confident differential diagnosis.
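
    The reconstruction step described here is a convolution-and-backprojection (filtered backprojection) of the center plane. The sketch below shows the parallel-beam analogue on a toy phantom using scikit-image; it is not the authors' fan-beam implementation, and the phantom, angle set, ramp filter, and recent scikit-image parameter names are assumptions made only to keep the example self-contained.

    ```python
    import numpy as np
    from skimage.transform import radon, iradon

    # Toy slice standing in for one reconstructed plane of the specimen.
    phantom = np.zeros((256, 256), dtype=float)
    phantom[100:160, 90:150] = 1.0

    # Forward-project over 180 degrees, then reconstruct with a ramp-filtered
    # backprojection (the parallel-beam analogue of convolution-backprojection).
    angles = np.linspace(0.0, 180.0, 360, endpoint=False)
    sinogram = radon(phantom, theta=angles)
    reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
    ```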

  6. Quantification of early cutaneous manifestations of chronic venous insufficiency by automated analysis of photographic images: Feasibility and technical considerations.

    PubMed

    Becker, François; Fourgeau, Patrice; Carpentier, Patrick H; Ouchène, Amina

    2018-06-01

    We postulate that blue telangiectasia and brownish pigmentation at ankle level, early markers of chronic venous insufficiency, can be quantified for longitudinal studies of chronic venous disease in Caucasian people. Objectives and methods: To describe a photographic technique specially developed for this purpose, and to test the short-term reproducibility of the measures. The pictures were acquired using a dedicated photo stand to position the foot in a reproducible way, with a normalized lighting and acquisition protocol. The image analysis was performed with a tool built on algorithms optimized to detect and quantify the blue telangiectasia and brownish pigmentation and their relative surface area within the region of interest. Results: The quantification of the blue telangiectasia and of the brownish pigmentation using automated digital photo analysis is feasible. The short-term reproducibility is good for blue telangiectasia quantification; it is less accurate for the brownish pigmentation. Conclusion: The blue telangiectasia of the corona phlebectatica and the ankle flare can be assessed using a clinimetric approach based on automated digital photo analysis.
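
    A minimal sketch of the kind of quantification the record describes, the relative surface of blue (telangiectasia-like) pixels within a positioned ankle region of interest, is given below. The simple blue-dominance rule and the margin parameter are illustrative assumptions, not the authors' calibrated detection algorithm.

    ```python
    import numpy as np

    def blue_area_fraction(rgb_image, roi_mask, margin=20):
        """Fraction of the ankle ROI flagged as 'blue' (telangiectasia-like) pixels.

        rgb_image: H x W x 3 uint8 photograph from the normalized acquisition.
        roi_mask:  boolean ROI from the positioning protocol.
        margin:    how strongly blue must dominate red and green (assumed value).
        """
        img = rgb_image.astype(np.int32)
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        blue_like = (b > r + margin) & (b > g + margin) & roi_mask
        return blue_like.sum() / max(int(roi_mask.sum()), 1)
    ```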

  7. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    PubMed

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in the management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be a sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using the laser Doppler imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified, with significant changes in mean cutaneous blood flow (P < .5). Laser Doppler imager flow data with a high valid-pixel rate can be obtained through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Perfusion units differed between participants; however, consistent perfusion patterns across the face were observed.

  8. Clinical feasibility of simultaneous multi-slice imaging with blipped-CAIPI for diffusion-weighted imaging and diffusion-tensor imaging of the brain.

    PubMed

    Yokota, Hajime; Sakai, Koji; Tazoe, Jun; Goto, Mariko; Imai, Hiroshi; Teramukai, Satoshi; Yamada, Kei

    2017-12-01

    Background: Simultaneous multi-slice (SMS) imaging is starting to be used in clinical practice, although evidence of its clinical feasibility is scant. Purpose: To prospectively assess the clinical feasibility of SMS diffusion-weighted imaging (DWI) and diffusion-tensor imaging (DTI) with blipped-controlled aliasing in parallel imaging for brain lesions. Material and Methods: The institutional review board approved this study, which included 156 hyperintense lesions on DWI from 32 patients. A slice acceleration factor of 2 was applied for SMS scans, which shortened the scan time by 41.3%. The signal-to-noise ratio (SNR) was calculated for brain tissue of a selected slice. The contrast-to-noise ratio (CNR), apparent diffusion coefficient (ADC), and fractional anisotropy (FA) were calculated in 36 hyperintense lesions with a diameter of three pixels or more. Visual assessment was performed for all 156 lesions. Tractography of the corticospinal tract was evaluated in 29 patients; the number of tracts and the averaged tract length were used for quantitative analysis, and visual assessment was graded. Results: On Bland-Altman analyses, the SMS scan showed no bias and acceptable 95% limits of agreement compared to conventional scans in SNR, CNR, and ADC. Only the FA of the lesions was higher in the SMS scan, by 9% (P = 0.016), whereas the FA of the surrounding tissues was similar. Quantitative analysis of tractography showed similar values, and visual assessment of DWI hyperintense lesions and tractography also yielded comparable evaluations. Conclusion: SMS imaging was clinically feasible, providing image quality and quantitative values comparable with conventional DWI and DTI.
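
    Two of the quantities compared in this record can be reproduced from first principles: the per-pixel ADC from a pair of diffusion weightings, and the Bland-Altman bias with 95% limits of agreement between SMS and conventional measurements. The sketch below assumes the standard mono-exponential ADC model and paired scalar measurements; it is not the vendor reconstruction or the authors' analysis pipeline.

    ```python
    import numpy as np

    def adc_map(s_b0, s_b1000, b=1000.0, eps=1e-6):
        """Apparent diffusion coefficient from two diffusion weightings.

        ADC = -ln(S_b / S_0) / b per pixel; b in s/mm^2 gives ADC in mm^2/s.
        The b-values 0 and 1000 are typical for brain DWI but assumed here.
        """
        ratio = (np.clip(s_b1000.astype(float), eps, None) /
                 np.clip(s_b0.astype(float), eps, None))
        return -np.log(ratio) / b

    def bland_altman(conventional, sms):
        """Bias and 95% limits of agreement between paired measurements."""
        conventional = np.asarray(conventional, float)
        sms = np.asarray(sms, float)
        diff = sms - conventional
        bias, sd = diff.mean(), diff.std(ddof=1)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```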

  9. ALA-PpIX variability quantitatively imaged in A431 epidermoid tumors using in vivo ultrasound fluorescence tomography and ex vivo assay

    NASA Astrophysics Data System (ADS)

    DSouza, Alisha V.; Flynn, Brendan P.; Gunn, Jason R.; Samkoe, Kimberley S.; Anand, Sanjay; Maytin, Edward V.; Hasan, Tayyaba; Pogue, Brian W.

    2014-03-01

    Treatment monitoring of aminolevulinic acid (ALA) photodynamic therapy (PDT) of basal cell carcinoma (BCC) calls for superficial and subsurface imaging techniques. While superficial imagers exist for this purpose, their ability to assess PpIX levels in thick lesions is poor; additionally, few treatment centers have the capability to measure ALA-induced PpIX production. Improving treatment of deeper and nodular BCCs is an area of active research, because treatment is least effective in these lesions. The goal of this work was to understand the logistics and technical capability to quantify PpIX at depths over 1 mm using a novel hybrid ultrasound-guided, fiber-based fluorescence molecular spectroscopic tomography system. This system uses a 633 nm excitation laser and detection with filtered spectrometers. Source and detection fibers are collinear so that their imaging plane matches that of the ultrasound transducer. Validation with phantoms and tumor-simulating fluorescent inclusions in mice showed sensitivity to fluorophore concentrations as low as 0.025 μg/ml at 4 mm depth from the surface, as presented in previous years. Image-guided quantification of ALA-induced PpIX production was completed in the subcutaneous xenograft epidermoid cancer tumor model A431 in nude mice. A total of 32 animals were imaged in vivo at several time points, including pre-ALA, 4 hours post-ALA, and 24 hours post-ALA administration. On average, PpIX production in tumors increased by over 10-fold at 4 hours post-ALA. Statistical analysis of PpIX fluorescence showed significant differences among all groups (p<0.05). Results were validated by ex vivo imaging of resected tumors. Details of the imaging, analysis, and results will be presented to illustrate the variability and the potential for imaging these values at depth.

  10. Comparison of pediatric radiation dose and vessel visibility on angiographic systems using piglets as a surrogate: antiscatter grid removal vs. lower detector air kerma settings with a grid — a preclinical investigation

    PubMed Central

    Racadio, John M.; Abruzzo, Todd A.; Johnson, Neil D.; Patel, Manish N.; Kukreja, Kamlesh U.; den Hartog, Mark. J. H.; Hoornaert, Bart P.A.; Nachabe, Rami A.

    2015-01-01

    The purpose of this study was to reduce pediatric doses while maintaining or improving image quality scores without removing the grid from the X-ray beam. This study was approved by the Institutional Animal Care and Use Committee. Three piglets (5, 14, and 20 kg) were imaged using six different selectable detector air kerma (Kair) per frame values (100%, 70%, 50%, 35%, 25%, 17.5%) with and without the grid. The number of distal branches visualized with diagnostic confidence relative to the injected vessel defined the image quality score. Five pediatric interventional radiologists evaluated all images. Image quality score and piglet Kair were statistically compared using analysis of variance and receiver operating characteristic curve analysis to define the preferred dose setting and use of the grid for visibility of 2nd- and 3rd-order vessel branches. Grid removal reduced both dose to subject and image quality by 26%. Third-order branches could only be visualized with the grid present; 100% detector Kair was required for the smallest pig, while 70% detector Kair was adequate for the two larger pigs. Second-order branches could be visualized with the grid at 17.5% detector Kair for all three pig sizes. Without the grid, 50%, 35%, and 35% detector Kair were required for the smallest to largest pig, respectively. Grid removal reduces both dose and image quality score. Image quality scores can be maintained with less dose to the subject with the grid in the beam as opposed to removed. Smaller anatomy requires more dose to the detector to achieve the same image quality score. PACS numbers: 87.53.Bn, 87.57.N‐, 87.57.cj, 87.59.cf, 87.59.Dj PMID:26699297

  11. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    The skin prick test is a commonly used method for diagnosing allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are the erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step toward precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region by detecting calibration marks drawn on the skin. Luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
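
    The processing chain described in the abstract (drop the luminance channel of a YCbCr conversion, run principal component analysis on Cb/Cr for contrast, then clean up morphologically) can be sketched as below. The Otsu threshold and the structuring-element sizes are assumptions added to make the example self-contained; the sign of the principal component is arbitrary and may need flipping.

    ```python
    import numpy as np
    from skimage.color import rgb2ycbcr
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_opening, binary_closing, disk

    def wheal_mask(rgb_patch):
        """Segment the wheal in a localized prick-test region.

        Drops the luminance (Y) channel, projects Cb/Cr onto their first
        principal component, thresholds the result, and cleans the binary
        map with morphological opening and closing.
        """
        ycbcr = rgb2ycbcr(rgb_patch)
        chroma = ycbcr[..., 1:3].reshape(-1, 2)              # Cb, Cr only
        chroma = chroma - chroma.mean(axis=0)
        _, _, vt = np.linalg.svd(chroma, full_matrices=False)
        contrast = (chroma @ vt[0]).reshape(rgb_patch.shape[:2])  # 1st PC image
        binary = contrast > threshold_otsu(contrast)         # assumed threshold
        binary = binary_closing(binary_opening(binary, disk(2)), disk(2))
        return binary
    ```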

  12. GPU-based prompt gamma ray imaging from boron neutron capture therapy.

    PubMed

    Yoon, Do-Kun; Jung, Joo-Young; Jo Hong, Key; Sil Lee, Keum; Suk Suh, Tae

    2015-01-01

    The purpose of this research is to perform fast reconstruction of a prompt gamma ray image using graphics processing unit (GPU) computation from boron neutron capture therapy (BNCT) simulations. To evaluate the accuracy of the reconstructed image, a phantom including four boron uptake regions (BURs) was used in the simulation. After the Monte Carlo simulation of the BNCT, a modified ordered subset expectation maximization reconstruction algorithm using GPU computation was used to reconstruct the images with fewer projections. The computation times for image reconstruction were compared between the GPU and the central processing unit (CPU), and the accuracy of the reconstructed image was evaluated by receiver operating characteristic (ROC) curve analysis. Image reconstruction using the GPU was 196 times faster than conventional reconstruction using the CPU. For the four BURs, the areas under the curve from the ROC analysis were 0.6726 (A-region), 0.6890 (B-region), 0.7384 (C-region), and 0.8009 (D-region). The tomographic image based on prompt gamma ray events from the BNCT simulation was acquired using GPU computation in order to perform fast reconstruction during treatment. The authors verified the feasibility of prompt gamma ray image reconstruction using GPU computation for BNCT simulations.
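
    For orientation, a plain-CPU NumPy sketch of the ordered subset expectation maximization (OSEM) update that the record accelerates on a GPU is shown below. The strided subset definition, the explicit system matrix, and the small regularizing constants are illustrative assumptions; the authors' modified algorithm and their GPU kernels are not reproduced here.

    ```python
    import numpy as np

    def osem(projections, system_matrix, n_subsets=4, n_iter=10):
        """Minimal ordered-subset expectation maximization (OSEM) sketch.

        projections:   measured counts, shape (m,).
        system_matrix: forward model A, shape (m, n).
        Returns the reconstructed image as a flat vector of length n.
        """
        m, n = system_matrix.shape
        x = np.ones(n)                                    # uniform initial image
        subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
        for _ in range(n_iter):
            for rows in subsets:
                A_s, y_s = system_matrix[rows], projections[rows]
                forward = A_s @ x + 1e-12                 # expected counts
                back = A_s.T @ (y_s / forward)            # backproject the ratio
                x *= back / (A_s.sum(axis=0) + 1e-12)     # multiplicative update
        return x
    ```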

  13. Estimation of T2* Relaxation Time of Breast Cancer: Correlation with Clinical, Imaging and Pathological Features

    PubMed Central

    Seo, Mirinae; Jahng, Geon-Ho; Sohn, Yu-Mee; Rhee, Sun Jung; Oh, Jang-Hoon; Won, Kyu-Yeoun

    2017-01-01

    Objective: The purpose of this study was to estimate the T2* relaxation time in breast cancer and to evaluate the association of the T2* value with clinical, imaging, and pathological features of breast cancer. Materials and Methods: Between January 2011 and July 2013, 107 consecutive women with 107 breast cancers underwent multi-echo T2*-weighted imaging on a 3T clinical magnetic resonance imaging system. Student's t test and one-way analysis of variance were used to compare the T2* values of cancers across groups based on the clinical, imaging, and pathological features. In addition, multiple linear regression analysis was performed to find independent predictive factors associated with the T2* values. Results: Of the 107 breast cancers, 92 were invasive and 15 were ductal carcinoma in situ (DCIS). The mean T2* value of invasive cancers was significantly longer than that of DCIS (p = 0.029). Signal intensity on T2-weighted imaging (T2WI) and histologic grade of invasive breast cancers showed significant correlation with T2* relaxation time in univariate and multivariate analyses. Breast cancer groups with higher signal intensity on T2WI showed longer T2* relaxation times (p = 0.005), and cancer groups with higher histologic grade showed longer T2* relaxation times (p = 0.017). Conclusion: The T2* value is significantly longer in invasive cancer than in DCIS. In invasive cancers, T2* relaxation time is significantly longer with higher histologic grades and high signal intensity on T2WI. Based on these preliminary data, quantitative T2* mapping has the potential to be useful in the characterization of breast cancer. PMID:28096732
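
    T2* estimation from a multi-echo series reduces to fitting the mono-exponential decay S(TE) = S0 * exp(-TE / T2*). A log-linear least-squares sketch is shown below; the example echo times and signals in the comment are made up for illustration and are not study data.

    ```python
    import numpy as np

    def fit_t2star(echo_times_ms, signals):
        """Mono-exponential T2* fit from a multi-echo series.

        Model: S(TE) = S0 * exp(-TE / T2*).  A log-linear least-squares fit
        is used for simplicity; TE in ms gives T2* in ms.
        """
        te = np.asarray(echo_times_ms, dtype=float)
        s = np.clip(np.asarray(signals, dtype=float), 1e-9, None)
        slope, intercept = np.polyfit(te, np.log(s), 1)   # ln S = ln S0 - TE/T2*
        return -1.0 / slope, np.exp(intercept)            # (T2*, S0)

    # Illustrative call with made-up numbers (not study data):
    # t2star_ms, s0 = fit_t2star([2.4, 6.2, 10.0, 13.8], [480, 390, 325, 270])
    ```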

  14. Computer-aided pulmonary image analysis in small animal models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of the airway tree, for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using the publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
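
    The first stage of the framework, predicting an expected lung volume from the approximated rib cage volume and flagging a scan whose initial segmentation falls well short of it, can be sketched as below. The linear regression form and the 25% tolerance are assumptions made for illustration; the abstract does not state the regression model or the decision threshold.

    ```python
    import numpy as np

    def fit_expected_lung_volume(ribcage_volumes, lung_volumes):
        """Fit a linear regression between approximated rib cage volume and
        total lung capacity, used to predict an 'expected' lung volume."""
        slope, intercept = np.polyfit(np.asarray(ribcage_volumes, float),
                                      np.asarray(lung_volumes, float), 1)
        return slope, intercept

    def severe_pathology_suspected(ribcage_volume, segmented_lung_volume,
                                   slope, intercept, rel_tol=0.25):
        """Flag a scan when the initial lung segmentation falls well short of
        the regression-predicted volume (rel_tol is an illustrative value)."""
        expected = slope * ribcage_volume + intercept
        return (expected - segmented_lung_volume) / expected > rel_tol
    ```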

  15. The Extraction of Terrace in the Loess Plateau Based on radial method

    NASA Astrophysics Data System (ADS)

    Liu, W.; Li, F.

    2016-12-01

    Terraces on the Loess Plateau are a typical artificial landform and an important soil and water conservation measure; locating them and extracting them automatically would simplify land use investigation. Existing terrace extraction methods comprise visual interpretation and automatic extraction. The manual method is used in land use investigation, but it is time-consuming and laborious. Researchers have proposed several automatic extraction methods. The Fourier transform method can recognize terraces and locate them accurately from the frequency-domain image, but it is strongly affected by linear objects oriented in the same direction as the terraces. Texture analysis is simple and widely applicable in image processing, but it cannot recognize terrace edges. Object-oriented classification is a newer approach, but when applied to terrace extraction it produces fragmented polygons whose geological meaning is difficult to interpret. To locate the terraces, we use high-resolution remote sensing imagery and extract and analyze the gray values of the pixels that each radial passes through. The recognition process is as follows: first, DEM data analysis or manual selection is used to roughly determine the positions of peak points; second, radials are cast in all directions from each peak point; finally, the gray values of the pixels along each radial are extracted and their variation is analyzed to determine whether a terrace exists. To locate terraces accurately, terrace discontinuity, extension direction, ridge width, the image processing algorithm, illumination of the remote sensing image, and other influencing factors were fully considered when designing the algorithms.
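
    The core of the radial method, casting radials from a peak point and sampling the gray values they pass through, can be sketched as below. The number of radials, the radial length, and nearest-neighbour sampling are illustrative choices; the subsequent analysis of the profile fluctuations, which decides whether a terrace exists, is not shown.

    ```python
    import numpy as np

    def radial_profiles(gray_image, peak_rc, n_radials=36, length=200):
        """Sample gray values along radials cast from a peak point.

        gray_image: 2-D array from the high-resolution image.
        peak_rc:    (row, col) of a peak (from DEM analysis or manual selection).
        Returns an (n_radials, length) array; quasi-periodic fluctuations along
        a profile suggest terrace ridges crossing that radial.
        """
        rows, cols = gray_image.shape
        r0, c0 = peak_rc
        angles = np.linspace(0.0, 2.0 * np.pi, n_radials, endpoint=False)
        steps = np.arange(length)
        profiles = np.full((n_radials, length), np.nan)
        for i, a in enumerate(angles):
            rr = np.round(r0 + steps * np.sin(a)).astype(int)
            cc = np.round(c0 + steps * np.cos(a)).astype(int)
            inside = (rr >= 0) & (rr < rows) & (cc >= 0) & (cc < cols)
            profiles[i, inside] = gray_image[rr[inside], cc[inside]]
        return profiles
    ```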

  16. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; Russell, Samuel S.

    2012-01-01

    Objective: Develop a software application utilizing high performance computing techniques, including general purpose graphics processing units (GPGPUs), for the analysis and visualization of large thermographic data sets. Over the past several years, an increasing effort among scientists and engineers to utilize graphics processing units (GPUs) in a more general purpose fashion is allowing for previously unobtainable levels of computation by individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU, which yield significant increases in performance. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Image processing is one area where GPUs are being used to greatly increase the performance of certain analysis and visualization techniques.

  17. Graphics Processing Unit Assisted Thermographic Compositing

    NASA Technical Reports Server (NTRS)

    Ragasa, Scott; McDougal, Matthew; Russell, Sam

    2013-01-01

    Objective: To develop a software application utilizing general purpose graphics processing units (GPUs) for the analysis of large sets of thermographic data. Background: Over the past few years, an increasing effort among scientists and engineers to utilize the GPU in a more general purpose fashion is allowing for supercomputer-level results at individual workstations. As data sets grow, the methods to work with them grow at an equal, and often greater, pace. Certain common computations can take advantage of the massively parallel and optimized hardware constructs of the GPU to allow for throughput that was previously reserved for compute clusters. These common computations have high degrees of data parallelism; that is, they are the same computation applied to a large set of data where the result does not depend on other data elements. Signal (image) processing is one area where GPUs are being used to greatly increase the performance of certain algorithms and analysis techniques.
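
    Both thermographic-compositing records describe the same pattern: a computation that is identical for every pixel and independent of neighbouring pixels, which maps well onto a GPU. The sketch below shows one such data-parallel reduction, a per-pixel maximum over a stack of frames, using CuPy as an illustrative way to run NumPy-style array code on a GPU; the library choice and the compositing operation itself are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    try:
        import cupy as xp      # GPU path, if CuPy and a CUDA device are available
    except ImportError:        # otherwise fall back to NumPy on the CPU
        xp = np

    def composite_max(frames):
        """Per-pixel maximum over a stack of thermographic frames.

        frames: (n_frames, H, W) array.  The reduction is the same for every
        pixel and independent of its neighbours, i.e. data-parallel, which is
        what makes it a good fit for GPU execution.
        """
        stack = xp.asarray(frames, dtype=xp.float32)
        result = stack.max(axis=0)
        return np.asarray(result.get()) if xp is not np else result
    ```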

  18. Characterization of extreme ultraviolet laser ablation mass spectrometry for actinide trace analysis and nanoscale isotopic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, Tyler; Kuznetsov, Ilya; Willingham, David

    The purpose of this research was to characterize extreme ultraviolet time-of-flight (EUV TOF) laser ablation mass spectrometry for high spatial resolution elemental and isotopic analysis. We compare EUV TOF results with secondary ionization mass spectrometry (SIMS) to orient the EUV TOF method within the overall field of analytical mass spectrometry. Using the well-characterized NIST 61x glasses, we show that the EUV ionization approach produces relatively few molecular ion interferences in comparison to TOF SIMS. We demonstrate that the ratio of element ion to element oxide ion is adjustable with EUV laser pulse energy and that the EUV TOF instrument has a sample utilization efficiency of 0.014%. The EUV TOF system also achieves a lateral resolution of 80 nm, and we demonstrate this lateral resolution with isotopic imaging of closely spaced particles of uranium isotopic standard materials.

  19. Analysis and Exchange of Multimedia Laboratory Data Using the Brain Database

    PubMed Central

    Wertheim, Steven L.

    1990-01-01

    Two principal goals of the Brain Database are: 1) to support laboratory data collection and analysis of multimedia information about the nervous system and 2) to support exchange of these data among researchers and clinicians who may be physically distant. This has been achieved by an implementation of experimental and clinical records within a relational database. An Image Series Editor has been created that provides a graphical interface to these data for the purposes of annotation, quantification and other analyses. Cooperating laboratories each maintain their own copies of the Brain Database to which they may add private data. Although the data in a given experimental or patient record will be distributed among many tables and external image files, the user can treat each record as a unit that can be extracted from the local database and sent to a distant colleague.

  20. Cost-effectiveness of magnetic resonance imaging versus ultrasound for the detection of symptomatic full-thickness supraspinatus tendon tears.

    PubMed

    Gyftopoulos, Soterios; Guja, Kip E; Subhas, Naveen; Virk, Mandeep S; Gold, Heather T

    2017-12-01

    The purpose of this study was to determine the value of magnetic resonance imaging (MRI) and ultrasound-based imaging strategies in the evaluation of a hypothetical population with a symptomatic full-thickness supraspinatus tendon (FTST) tear using formal cost-effectiveness analysis. A decision analytic model from the health care system perspective for 60-year-old patients with symptoms secondary to a suspected FTST tear was used to evaluate the incremental cost-effectiveness of 3 imaging strategies during a 2-year time horizon: MRI, ultrasound, and ultrasound followed by MRI. Comprehensive literature search and expert opinion provided data on cost, probability, and quality of life estimates. The primary effectiveness outcome was quality-adjusted life-years (QALYs) through 2 years, with a willingness-to-pay threshold set to $100,000/QALY gained (2016 U.S. dollars). Costs and health benefits were discounted at 3%. Ultrasound was the least costly strategy ($1385). MRI was the most effective (1.332 QALYs). Ultrasound was the most cost-effective strategy but was not dominant. The incremental cost-effectiveness ratio for MRI was $22,756/QALY gained, below the willingness-to-pay threshold. Two-way sensitivity analysis demonstrated that MRI was favored over the other imaging strategies over a wide range of reasonable costs. In probabilistic sensitivity analysis, MRI was the preferred imaging strategy in 78% of the simulations. MRI and ultrasound represent cost-effective imaging options for evaluation of the patient thought to have a symptomatic FTST tear. The results indicate that MRI is the preferred strategy based on cost-effectiveness criteria, although the decision between MRI and ultrasound for an imaging center is likely to be dependent on additional factors, such as available resources and workflow. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
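
    The decision metric in this record is the incremental cost-effectiveness ratio (ICER) judged against a $100,000/QALY willingness-to-pay threshold. A minimal sketch of that comparison is below; the simplified decision rule ignores extended dominance, discounting, and sensitivity analysis, so it illustrates the arithmetic only and is not the study's decision-analytic model.

    ```python
    def icer(cost_new, qaly_new, cost_ref, qaly_ref):
        """Incremental cost-effectiveness ratio (cost per QALY gained) of a
        new strategy relative to a reference strategy."""
        return (cost_new - cost_ref) / (qaly_new - qaly_ref)

    def preferred(cost_new, qaly_new, cost_ref, qaly_ref, wtp=100_000.0):
        """Pick the new strategy only when it gains QALYs at or below the
        willingness-to-pay threshold (simplified rule for illustration)."""
        if qaly_new <= qaly_ref:
            return "reference"
        return ("new"
                if icer(cost_new, qaly_new, cost_ref, qaly_ref) <= wtp
                else "reference")
    ```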
