Science.gov

Sample records for image analysis

  1. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  2. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency... The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is...

  3. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  4. Basics of image analysis

USDA-ARS's Scientific Manuscript database

Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it aims to improve the qualit...

  5. Forensic video image analysis

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas R.

    1997-02-01

Forensic video image analysis is a new scientific tool for perpetrator enhancement and identification in poorly recorded crime-scene situations. It is an emerging technology for law enforcement, industrial security, and surveillance, addressing the following problems often found in poor-quality video-recorded incidents.

  6. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

AD-A263 679. Multisensor Image Analysis System, Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric... Report type and dates covered: Final.

  7. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  8. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  9. Digital Image Analysis of Cereals

USDA-ARS's Scientific Manuscript database

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  10. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  11. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

TR-1180. Parallel Algorithms for Image Analysis (technical report). Author: Azriel Rosenfeld. Grant: AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  12. Moving Image Analysis System

    NASA Astrophysics Data System (ADS)

    Shifley, Loren A.

    1989-02-01

The recent introduction of a two dimensional interactive software package provides a new technique for quantitative analysis. Integrated with its corresponding peripherals, the same software offers either film or video data reduction. Digitized data points measured from the images are stored in the computer. With this data, a variety of information can be displayed, printed or plotted in a graphical form. The resultant graphs could determine such factors as: displacement, force, velocity, momentum, angular acceleration, center of gravity, energy, length, angle and time to name a few. Simple, efficient and precise analysis can now be quantified and documented. This paper will describe the detailed capabilities of the software along with a variety of applications where it might be used.

  13. Moving image analysis system

    NASA Astrophysics Data System (ADS)

    Shifley, Loren A.

    1990-08-01

    The recent introduction of a two dimensional interactive software package provides a new technique for quantitative analysis. Integrated with its corresponding peripherals, the same software offers either film or video data reduction. Digitized data points measured from the images are stored in the computer. With this data, a variety of information can be displayed, printed or plotted in a graphical form. The resultant graphs could determine such factors as: displacement, force, velocity, momentum, angular acceleration, center of gravity, energy, length, angle and time to name a few. Simple, efficient and precise analysis can now be quantified and documented. This paper will describe the detailed capabilities of the software along with a variety of applications where it might be used.
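The data reduction both of these records describe, turning digitized frame-by-frame positions into derived quantities such as velocity, can be sketched minimally. The function name and the central-difference scheme below are illustrative assumptions, not details from the paper:

```python
def velocity(positions, dt):
    """Central-difference velocity estimate from digitized positions
    sampled every dt seconds (one value per film/video frame)."""
    return [(positions[i + 1] - positions[i - 1]) / (2 * dt)
            for i in range(1, len(positions) - 1)]
```

For positions following x = t^2 sampled at dt = 1, the central differences are exact: velocity([0, 1, 4, 9, 16], 1) gives [2.0, 4.0, 6.0].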

  14. Brain Imaging Analysis

    PubMed Central

    BOWMAN, F. DUBOIS

    2014-01-01

    The increasing availability of brain imaging technologies has led to intense neuroscientific inquiry into the human brain. Studies often investigate brain function related to emotion, cognition, language, memory, and numerous other externally induced stimuli as well as resting-state brain function. Studies also use brain imaging in an attempt to determine the functional or structural basis for psychiatric or neurological disorders and, with respect to brain function, to further examine the responses of these disorders to treatment. Neuroimaging is a highly interdisciplinary field, and statistics plays a critical role in establishing rigorous methods to extract information and to quantify evidence for formal inferences. Neuroimaging data present numerous challenges for statistical analysis, including the vast amounts of data collected from each individual and the complex temporal and spatial dependence present. We briefly provide background on various types of neuroimaging data and analysis objectives that are commonly targeted in the field. We present a survey of existing methods targeting these objectives and identify particular areas offering opportunities for future statistical contribution. PMID:25309940

  15. DIDA - Dynamic Image Disparity Analysis.

    DTIC Science & Technology

    1982-12-31

Keywords: image understanding, dynamic image analysis, disparity analysis, optical flow, real-time processing. ...three aspects of dynamic image analysis must be studied: effectiveness, generality, and efficiency. In addition, efforts must be made to understand the... environment. A better understanding of the need for these limiting constraints is required. Efficiency is obviously important if dynamic image analysis is...

  16. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Spreadsheet-Like Image Analysis

    DTIC Science & Technology

    1992-08-01

AD-A254 395, AD-E402 350. Technical Report ARPAD-TR-92002: Spreadsheet-Like Image Analysis. Paul Willson, August 1992, U.S. Army... Subject terms: image analysis, nondestructive inspection, spreadsheet, Macintosh software, neural network, signal processing.

  18. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

ETL-0258. Knowledge-Based Image Analysis. George C. Stockman, Barbara A. Lambird, David Lavine, Laveen N. Kanal... Keywords: extraction, verification, region classification, pattern recognition, image analysis.

  19. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  20. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  1. Paraxial ghost image analysis

    NASA Astrophysics Data System (ADS)

    Abd El-Maksoud, Rania H.; Sasian, José M.

    2009-08-01

    This paper develops a methodology to model ghost images that are formed by two reflections between the surfaces of a multi-element lens system in the paraxial regime. An algorithm is presented to generate the ghost layouts from the nominal layout. For each possible ghost layout, paraxial ray tracing is performed to determine the ghost Gaussian cardinal points, the size of the ghost image at the nominal image plane, the location and diameter of the ghost entrance and exit pupils, and the location and diameter for the ghost entrance and exit windows. The paraxial ghost irradiance point spread function is obtained by adding up the irradiance contributions for all ghosts. Ghost simulation results for a simple lens system are provided. This approach provides a quick way to analyze ghost images in the paraxial regime.
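The combinatorics behind generating the ghost layouts can be sketched: each two-reflection ghost path is fixed by choosing which surface reflects the light backward and which earlier surface reflects it forward again, giving n(n-1)/2 layouts for n surfaces. The enumeration below is an illustration of that counting, not the paper's algorithm:

```python
from itertools import combinations

def ghost_pairs(n_surfaces):
    """Enumerate two-reflection ghost layouts as pairs (i, j) with i < j:
    light reflects backward off surface j, then forward off surface i."""
    return [(i, j) for i, j in combinations(range(1, n_surfaces + 1), 2)]
```

A six-surface (three-element) lens therefore yields 15 candidate ghost layouts, each of which is then paraxially ray-traced.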

  2. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to the general advances being made in the types of algorithms used to process images and perform various detection and recognition tasks. A more important reason for this growth in medical image analysis, however, may be very different. The use of computer workstations, digital image acquisition technologies and CRT monitors for display of medical images for primary diagnostic reading is becoming more prevalent in radiology departments around the world. With the advance in computer-based displays, however, has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good with these new technologies and image analysis procedures as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  3. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe. PMID:20671804

  4. Multispectral analysis of multimodal images.

    PubMed

    Kvinnsland, Yngve; Brekke, Njål; Taxt, Torfinn M; Grüner, Renate

    2009-01-01

An increasing number of multimodal images represents a valuable increase in available image information, but at the same time complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images comprises the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that seem to be sensible. The MSA software appears to be a valuable tool for image analysis with multimodal images at hand. It readily gives a segmentation of image volumes that visually seems to be sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and the upcoming work will therefore be focused on examining the tissues through, for example, histological sections.
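The unsupervised route mentioned in the abstract can be illustrated with a generic k-means pass over per-voxel feature vectors (one column per modality, e.g. T1 and T2 intensity). This is a minimal sketch of the standard algorithm, not the authors' implementation:

```python
import numpy as np

def kmeans(features, k, n_iter=20, seed=0):
    """Plain k-means: features is an (n_voxels, n_modalities) array."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each voxel to the nearest class centre
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of its assigned voxels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers
```

On two well-separated tissue classes this converges in a few iterations; the EM and Bayesian variants the paper names replace the hard assignment with multinormal class likelihoods.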

  5. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
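The "integral image" structure named in the abstract is a standard technique: a cumulative-sum table lets any rectangular region be summed with four lookups. A minimal sketch (in Python with NumPy for brevity, rather than FIIAT's C):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[r, c] = img[:r, :c].sum()."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] in O(1), independent of size."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```

This constant-time summation, using only integer arithmetic, is what makes real-time texture descriptors feasible on platforms without a floating-point unit.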

  6. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, such as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the length of time required to obtain sequence data and reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
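The band-location step, finding peaks in an aligned one-dimensional profile, can be sketched with simple local-maximum picking. The methods described in the abstract are adaptive and considerably more elaborate, so treat this as illustration only:

```python
def find_bands(profile, min_height):
    """Indices of samples strictly greater than both neighbours and at
    least min_height; each index marks one candidate band (nucleotide)."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1]
            and profile[i] > profile[i + 1]
            and profile[i] >= min_height]
```

A real pipeline would first estimate and subtract the local background so that min_height adapts to the noise level along the lane.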

  7. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  8. Anmap: Image and data analysis

    NASA Astrophysics Data System (ADS)

    Alexander, Paul; Waldram, Elizabeth; Titterington, David; Rees, Nick

    2014-11-01

    Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

  9. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  10. Multivariate image analysis in biomedicine.

    PubMed

    Nattkemper, Tim W

    2004-10-01

In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects, as well as in clinical studies, m-dimensional multivariate images (MVI) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high throughput applications call for new strategies for the application of image processing and data mining to support the direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, the two-level framework for MVI analysis is illustrated. Following this framework, the state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarks upon the future development of biomedical MVI analysis.

  11. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed. Program summary: Title of program: HAWGC. Catalogue identifier: ADXG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computers: Mobile Intel Pentium III, AMD Duron. Installations: no installation necessary; executable file together with necessary files for LabVIEW Run-time engine. Operating systems under which the program has been tested: Windows ME/2000/XP. Programming language used: LabVIEW 7.0. Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for...
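The histogram parameters listed (average brightness, standard deviation, skewness, kurtosis) correspond to the first four moments of the brightness distribution. A minimal Python sketch under the usual definitions; the original program is LabVIEW, and treating its kurtosis as excess kurtosis is an assumption here:

```python
import numpy as np

def grayscale(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Weighted RGB-to-grey conversion with selectable coefficients."""
    return rgb @ np.asarray(coeffs)

def histogram_stats(grey):
    """First four moments of the brightness values."""
    mean = grey.mean()
    std = grey.std()
    centred = grey - mean
    skew = (centred ** 3).mean() / std ** 3
    kurt = (centred ** 4).mean() / std ** 4 - 3.0  # excess kurtosis
    return {"mean": mean, "std": std, "skew": skew, "kurtosis": kurt}
```

For a symmetric brightness distribution the skewness comes out zero, which is a quick sanity check when validating such a routine.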

  12. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

This report describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: image processing; algorithms; segmentation; boundary detection; tomography; global image analysis.

  13. Multifluorescence 2D gel imaging and image analysis.

    PubMed

    Vormbrock, Ingo; Hartwig, Sonja; Lehr, Stefan

    2012-01-01

    Although image acquisition and analysis are crucial steps within the multifluorescence two-dimensional gel electrophoresis workflow, some basics are frequently not carried out with the necessary diligence. This chapter should help to prevent easily avoidable failures during imaging and image preparation for comparative protein analysis.

  14. UV imaging in pharmaceutical analysis.

    PubMed

    Østergaard, Jesper

    2017-08-01

    UV imaging provides spatially and temporally resolved absorbance measurements, which are highly useful in pharmaceutical analysis. Commercial UV imaging instrumentation was originally developed as a detector for separation sciences, but the main use is in the area of in vitro dissolution and release testing studies. The review covers the basic principles of the technology and summarizes the main applications in relation to intrinsic dissolution rate determination, excipient compatibility studies and in vitro release characterization of drug substances and vehicles intended for parenteral administration. UV imaging has potential for providing new insights to drug dissolution and release processes in formulation development by real-time monitoring of swelling, precipitation, diffusion and partitioning phenomena. Limitations of current instrumentation are discussed and a perspective to new developments and opportunities given as new instrumentation is emerging. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies.

  16. Image analysis in medical imaging: recent advances in selected examples

    PubMed Central

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments. PMID:21611048

  17. Uncooled thermal imaging and image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Shiyun; Chang, Benkang; Yu, Chunyu; Zhang, Junju; Sun, Lianjun

    2006-09-01

    A thermal imager converts temperature differences into differences in electrical signal level, and so can be applied in medical treatment, for example to estimate blood flow speed and vessel location [1] and to assess pain [2]. As un-cooled focal plane array (UFPA) technology matures, some simple medical functions can be performed with an un-cooled thermal imager, such as rapid fever screening during the SARS outbreak. This requires stable imaging performance and sufficiently high spatial and temperature resolution. Among all performance parameters, noise equivalent temperature difference (NETD) is often used as the criterion of overall performance. The 320 x 240 α-Si micro-bolometer UFPA is currently in wide use for its stable performance and sensitive responsivity. In this paper, the NETD of UFPAs and the relation between NETD and temperature are investigated. Several vital parameters that affect NETD are listed and a universal formula is presented. Finally, images from this kind of thermal imager are analyzed for the purpose of detecting persons with fever. An applied thermal image intensification method is introduced.
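
The dependence of NETD on noise and responsivity can be illustrated with a generic two-temperature blackbody calibration: responsivity is the per-pixel signal change per kelvin, temporal noise is the per-pixel standard deviation over frames, and their ratio estimates NETD. This Python/NumPy sketch assumes stacks of frames viewing uniform scenes at two known temperatures; it is not the universal formula derived in the paper.

```python
import numpy as np

def netd_estimate(frames_cold, frames_hot, t_cold, t_hot):
    # Per-pixel responsivity: signal change per kelvin between two uniform
    # blackbody scenes (frame stacks with shape n_frames x H x W).
    responsivity = (frames_hot.mean(axis=0) - frames_cold.mean(axis=0)) / (t_hot - t_cold)
    # Per-pixel temporal noise: standard deviation over the cold frames.
    noise = frames_cold.std(axis=0)
    # NETD is the scene temperature difference whose signal equals the noise;
    # take the median over the array as a robust summary.
    return float(np.median(noise / responsivity))
```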

  18. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
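
The unfold-and-normalise step described above can be sketched as follows. In this Python/NumPy fragment, a crude per-spectrum minimum subtraction stands in for the paper's image-specific background correction, and the multivariate analysis stage is omitted:

```python
import numpy as np

def combine_raman_images(images):
    # Unfold each H x W x n_wavenumbers image into a spectra matrix, remove a
    # crude per-spectrum baseline offset, normalise each spectrum to unit
    # length so separately measured images become comparable, and stack all
    # images into one dataset ready for multivariate analysis.
    blocks = []
    for img in images:
        spectra = img.reshape(-1, img.shape[-1]).astype(float)
        spectra -= spectra.min(axis=1, keepdims=True)
        norms = np.linalg.norm(spectra, axis=1, keepdims=True)
        norms[norms == 0] = 1.0  # guard against empty spectra
        blocks.append(spectra / norms)
    return np.vstack(blocks)
```

The combined matrix can then be passed to any multivariate decomposition or to k-means clustering, as in the study.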

  19. Imaging analysis of LDEF craters

    NASA Technical Reports Server (NTRS)

    Radicatidibrozolo, F.; Harris, D. W.; Chakel, J. A.; Fleming, R. H.; Bunch, T. E.

    1991-01-01

    Two small craters in Al from the Long Duration Exposure Facility (LDEF) experiment tray A11E00F (no. 74, 119 micron diameter and no. 31, 158 micron diameter) were analyzed using Auger electron spectroscopy (AES), time-of-flight secondary ion mass spectroscopy (TOF-SIMS), low voltage scanning electron microscopy (LVSEM), and SEM energy dispersive spectroscopy (EDS). High resolution images and sensitive elemental and molecular analysis were obtained with this combined approach. The results of these analyses are presented.

  20. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  1. Grid computing in image analysis.

    PubMed

    Kayser, Klaus; Görtler, Jürgen; Borkenfeld, Stephan; Kayser, Gian

    2011-01-01

    Diagnostic surgical pathology or tissue-based diagnosis still remains the most reliable and specific diagnostic medical procedure. The development of whole slide scanners permits the creation of virtual slides and work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all features needed to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules in using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five different Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis.

  2. Determining optimal medical image compression: psychometric and image distortion analysis

    PubMed Central

    2012-01-01

    Background Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. Methods To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. Results When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. Conclusion It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original. PMID:22849336
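
The subtraction analysis used in this study can be reproduced in outline: round-trip an image through JPEG at a chosen Q value and measure the mean absolute pixel difference from the original. The sketch below uses Pillow on a greyscale uint8 array as a simplification; the study itself used clinical CT, MR, and XR images.

```python
import io

import numpy as np
from PIL import Image

def jpeg_distortion(gray_u8, quality):
    # Compress a greyscale uint8 image as JPEG at the given Q value, decode
    # it again, and return the mean absolute pixel difference from the
    # original (the 'subtraction analysis' measure of distortion magnitude).
    buf = io.BytesIO()
    Image.fromarray(gray_u8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    decoded = np.asarray(Image.open(buf), dtype=np.int16)
    return float(np.abs(decoded - gray_u8.astype(np.int16)).mean())
```

Plotting this measure against Q reproduces the qualitative shape reported above: distortion grows slowly at high Q and inflects sharply at low Q.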

  3. Determining optimal medical image compression: psychometric and image distortion analysis.

    PubMed

    Flint, Alexander C

    2012-07-31

    Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original.

  4. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated by 40 human subjects' data and demonstrates high correlation with experts' annotations.
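
Specular reflection, mentioned above as a main obstacle to acetowhite extraction, is commonly suppressed by masking very bright, low-saturation pixels. The following sketch is a generic illustration with hypothetical thresholds, not the calibration-based system described in the paper:

```python
import numpy as np

def specular_mask(rgb, bright_thresh=230, sat_thresh=0.10):
    # Specular highlights are very bright and nearly colourless: flag pixels
    # with high maximum intensity and low saturation. Both thresholds are
    # illustrative assumptions, not values from the paper.
    img = rgb.astype(float)
    brightness = img.max(axis=-1)
    chroma = brightness - img.min(axis=-1)
    saturation = np.where(brightness > 0, chroma / np.maximum(brightness, 1e-9), 0.0)
    return (brightness > bright_thresh) & (saturation < sat_thresh)
```

Masked pixels would then be excluded (or inpainted) before the region segmentation steps.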

  5. Multispectral Image Analysis of Hurricane Gilbert

    DTIC Science & Technology

    1989-05-19

    Classification) Multispectral Image Analysis of Hurricane Gilbert (unclassified) 12. PERSONAL AUTHOR(S) Kleespies, Thomas J. (GL/LYS) 13a. TYPE OF REPORT...cloud top height. component of the image in the red channel, and similarly for the green and blue channels. Multispectral image analysis can...However, there seems to be few references to the human range of vision, the selection as to which multispectral image analysis of scenes or

  6. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  7. Statistical analysis of biophoton image

    NASA Astrophysics Data System (ADS)

    Wang, Susheng

    1998-08-01

    A photon count image system has been developed to obtain ultra-weak bioluminescence images. Photon images of some plants, animals and a human hand have been detected. The biophoton image differs from the usual image. In this paper three characteristics of biophoton images are analyzed. On the basis of these characteristics, the detection probability and detection limit of the photon count image system, and the detection limit of the biophoton image, are discussed. These researches provide a scientific basis for experiment design and photon image processing.

  8. Ultrasonic image analysis and image-guided interventions

    PubMed Central

    Noble, J. Alison; Navab, Nassir; Becher, H.

    2011-01-01

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and used in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms which exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and describing some probable future trends in this important area of ultrasonic imaging research. PMID:22866237

  9. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  10. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm2 retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R2=0.92, p<=0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R2=0.95, p<=0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
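
Texture features such as contrast, energy and homogeneity are conventionally derived from a grey-level co-occurrence matrix (GLCM). The sketch below builds a simplified single-offset GLCM in Python/NumPy; it illustrates the kind of features named above, not the study's exact implementation or its skewness, coarseness and fractal-dimension measures.

```python
import numpy as np

def glcm_features(roi, levels=8):
    # Quantise the ROI to a small number of grey levels, count horizontally
    # adjacent pixel pairs into a co-occurrence matrix, normalise it to a
    # joint probability, and derive three classic Haralick-style features.
    roi = roi.astype(float)
    peak = roi.max()
    q = np.floor(roi / peak * (levels - 1)).astype(int) if peak > 0 else np.zeros(roi.shape, dtype=int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

A uniform ROI gives zero contrast and maximal energy/homogeneity; high-frequency texture drives contrast up.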

  11. Image analysis: a consumer's guide.

    PubMed

    Meyer, F

    1983-01-01

    Recent years have seen an explosion of systems in image analysis. It is hard for the pathologist or the cytologist to make the right choice of equipment. All machines are stupid, and the only valuable thing is the human work put into them. So benefit from the work other people have done for you. Choose a method largely used on many systems, one which has proved fertile in many domains and not only for today's specific application: Mathematical Morphology, to which should be added the linear convolutions present on all machines, is a strong candidate for becoming such a method. The paper illustrates a working day of an ideal system: research- and diagnostic-directed work during the working hours, automatic screening of cervical (or other) smears during the night.

  12. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment for the purpose of verifying the correctness of the inspection analysis paradigms, developing better analysis paradigms, and gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks and image processing as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto the resultant. Like a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.
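
The cell-and-cascade design described above can be sketched in a few lines: each cell holds either a value (possibly a pixel matrix) or a function of other cells, and every read recomputes through the dependency chain, so changing an input cascades into all dependent resultants. This is a minimal illustration of the dataflow idea, not the Army system itself:

```python
import numpy as np

class Sheet:
    # Minimal spreadsheet-like dataflow over arbitrary values, including
    # matrices of pixels. No caching: dependents recompute on each read,
    # which is what makes changes to inputs cascade automatically.
    def __init__(self):
        self.values, self.formulas = {}, {}

    def set_value(self, name, value):
        self.values[name] = value

    def set_formula(self, name, func, *arg_names):
        self.formulas[name] = (func, arg_names)

    def get(self, name):
        if name in self.formulas:
            func, arg_names = self.formulas[name]
            return func(*(self.get(a) for a in arg_names))
        return self.values[name]
```

For example, cell "A" may hold an image matrix and cell "B" a defect count computed from "A"; updating "A" changes what "B" returns.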

  13. Clock Scan Protocol for Image Analysis: ImageJ Plugins.

    PubMed

    Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen

    2017-06-19

    The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. This protocol was originally developed in 2006, as a Visual Basic 6 script, but as such, it had limited distribution. To address this problem and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions, further expanding the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, the clock scan analysis of stacks of biological images may potentially be applied to spreading of Na(+) or Ca(++) within a single cell, as well as to the analysis of spreading activity (e.g., Ca(++) waves) in populations of synaptically-connected or gap junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
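
The core idea, an averaged integral radial pixel-intensity profile, can be sketched as follows: sample the image along rays ("clock hands") from a center out to a fixed radius and average the rays. This Python/NumPy fragment is a simplified circular-ROI illustration, not the ImageJ plugin code, which additionally scales each ray to the ROI border:

```python
import numpy as np

def clock_scan_profile(image, center, radius, n_angles=360, n_samples=100):
    # Average radial intensity profile: nearest-pixel sampling along rays at
    # evenly spaced angles, averaged into one profile from center (index 0)
    # to the outer radius (last index).
    h, w = image.shape
    profile = np.zeros(n_samples)
    r = np.linspace(0, radius, n_samples)
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        x = np.clip(np.round(center[1] + r * np.cos(theta)).astype(int), 0, w - 1)
        y = np.clip(np.round(center[0] + r * np.sin(theta)).astype(int), 0, h - 1)
        profile += image[y, x]
    return profile / n_angles
```

Applied to a stack, one such profile per frame gives the time-and-location information the plugins exploit.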

  14. Naval Signal and Image Analysis Conference Report

    DTIC Science & Technology

    1998-02-26

    Arlington Hilton Hotel in Arlington, Virginia. The meeting was by invitation only and consisted of investigators in the ONR Signal and Image Analysis Program...in signal and image analysis. The conference provided an opportunity for technical interaction between academic researchers and Naval scientists and...plan future directions for the ONR Signal and Image Analysis Program as well as informal recommendations to the Program Officer.

  15. Image analysis applications for grain science

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Steele, James L.

    1991-02-01

    Morphometrical features of single grain kernels or particles were used to discriminate two visibly similar wheat varieties, foreign material in wheat, hard/soft and spring/winter wheat classes, and whole from broken corn kernels. Milled fractions of hard and soft wheat were evaluated using textural image analysis. Color image analysis of sound and mold-damaged corn kernels yielded high recognition rates. The studies collectively demonstrate the potential for automated classification and assessment of grain quality using image analysis.

  16. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
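
The four-step process above (enhancement, feature extraction, network training, classification) can be miniaturized as follows. The hand-picked features and single-layer perceptron here are illustrative stand-ins chosen for brevity; SIANN's actual feature extraction algorithms and network model are not specified in this abstract.

```python
import numpy as np

def extract_features(image):
    # Toy feature extraction: mean intensity, intensity spread, and mean
    # gradient magnitude ('edge energy') of a 2-D image.
    gy, gx = np.gradient(image.astype(float))
    return np.array([image.mean(), image.std(), np.hypot(gx, gy).mean()])

def standardize(X):
    # Zero-mean, unit-variance scaling of each feature column.
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9
    return (X - mu) / sigma, mu, sigma

def train_perceptron(X, y, lr=0.1, epochs=200):
    # Single-layer perceptron with a bias term; y holds 0/1 class labels.
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi
    return w

def classify(w, x):
    return 1 if np.append(x, 1.0) @ w > 0 else 0
```

Training on features of "scene of interest" vs. "other" images, then classifying archived images, mirrors the four-step SIANN workflow in miniature.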

  17. Microscopy image segmentation tool: robust image data analysis.

    PubMed

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  18. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and mesoscopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible, allowing incorporation of specialized user-developed analyses. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  19. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
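
The method's central quantity, the percentage of second-image edge pixels that are also edges in the translated first image, can be sketched directly. This illustration uses wrap-around shifts and an exhaustive search window for brevity; the patent's statistical criterion for identifying "not significantly worse" registration points is omitted.

```python
import numpy as np

def edge_match_percentage(edges1, edges2, dy, dx):
    # Percentage of edge pixels in the second edge map that coincide with
    # edge pixels in the first edge map shifted by (dy, dx).
    shifted = np.roll(np.roll(edges1, dy, axis=0), dx, axis=1)
    n_edges = edges2.sum()
    return 100.0 * (edges2 & shifted).sum() / n_edges if n_edges else 0.0

def best_registration(edges1, edges2, search=5):
    # Exhaustive search over a small translation window; the best
    # registration point maximises the percentage of matched edges.
    scores = {(dy, dx): edge_match_percentage(edges1, edges2, dy, dx)
              for dy in range(-search, search + 1)
              for dx in range(-search, search + 1)}
    return max(scores, key=scores.get), scores
```

The full `scores` map is returned so near-optimal registration points could be examined, as the uncertainty analysis step requires.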

  20. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document constitute a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended, but the analysis can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  1. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  3. Information granules in image histogram analysis.

    PubMed

    Wieclawek, Wojciech

    2017-05-10

    A concept of granular computing employed in intensity-based image enhancement is discussed. First, a weighted granular-computing idea is introduced. Then, its implementation in the image processing area is presented. Finally, multidimensional granular histogram analysis is introduced. The proposed approach is dedicated to digital images, especially medical images acquired by computed tomography (CT). Like the histogram equalization approach, this method is based on image histogram analysis. Yet, unlike the histogram equalization technique, it works on a selected range of the pixel intensity and is controlled by two parameters. Performance is tested on anonymized clinical CT series.
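    The idea of equalization restricted to a selected intensity range, controlled by two parameters, can be illustrated with a plain-NumPy sketch. This is a simplified stand-in, not the paper's granular-computing method: the two parameters here are just the lower and upper bounds of the range, and pixels outside the range are left untouched.

```python
import numpy as np

def range_equalize(img, lo, hi):
    """Histogram equalization restricted to intensities in [lo, hi].
    Pixels outside the range pass through unchanged."""
    out = img.copy()
    mask = (img >= lo) & (img <= hi)
    vals = img[mask]
    if vals.size == 0:
        return out
    # Histogram and cumulative distribution over the selected range only.
    hist, _ = np.histogram(vals, bins=hi - lo + 1, range=(lo, hi + 1))
    cdf = hist.cumsum() / vals.size
    # Remap the in-range pixels back into [lo, hi] via the CDF.
    out[mask] = (lo + cdf[vals - lo] * (hi - lo)).astype(img.dtype)
    return out
```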

  4. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with a larger amount of higher-complexity data than previous missions. We have improved our view of the Sun yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; little quantitative and objective analysis is done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods may be suited to solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formalize the human ability to view and comprehend phenomena on different scales, so these techniques could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-D wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
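    A minimal example of the kind of multiscale decomposition explored here: one level of the 2-D Haar wavelet transform written directly in NumPy, using an averaging convention so the coarse band is a 2x-downsampled mean. A real analysis would use a full wavelet library and several levels; this sketch only shows the mechanics.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar wavelet transform: returns the coarse
    approximation (LL) plus three detail bands. Assumes even dimensions."""
    a = img.astype(float)
    # Average/difference adjacent column pairs, then adjacent row pairs.
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo_r[0::2] + lo_r[1::2]) / 2   # coarse approximation
    lh = (lo_r[0::2] - lo_r[1::2]) / 2   # detail across rows
    hl = (hi_r[0::2] + hi_r[1::2]) / 2   # detail across columns
    hh = (hi_r[0::2] - hi_r[1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh
```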

  5. A Mathematical Framework for Image Analysis

    DTIC Science & Technology

    1991-08-01

    The results reported here were derived from the research project 'A Mathematical Framework for Image Analysis', supported by the Office of Naval Research under contract N00014-88-K-0289 to Brown University. A common theme for the work reported is the use of probabilistic methods for problems in image analysis and image reconstruction. Five areas of research are described: rigid body recognition using a decision tree/combinatorial approach; nonrigid

  6. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse outcome, which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2-D finite difference operator. We establish bounds on the minimum number of measurements that are lower than those in cases without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
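    The 2-D finite difference operator mentioned in the abstract is easy to write down concretely: applied to a piecewise-constant image it produces many zeros, and the count of those zeros is the cosparsity the analysis model exploits. A minimal sketch:

```python
import numpy as np

def finite_difference(img):
    """2-D finite difference analysis operator: horizontal and vertical
    first differences of the image."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return dx, dy

def cosparsity(img, tol=1e-9):
    """Number of (near-)zero entries in the analysis representation --
    the 'cosparsity' of the image under the operator."""
    dx, dy = finite_difference(img.astype(float))
    return int((np.abs(dx) <= tol).sum() + (np.abs(dy) <= tol).sum())
```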

  7. Merging Panchromatic and Multispectral Images for Enhanced Image Analysis

    DTIC Science & Technology

    1990-08-01

    Multispectral Images for Enhanced Image Analysis. I, Curtis K. Munechika, grant permission to the Wallace Memorial Library of the Rochester Institute of... [garbled per-class confusion table omitted; the recoverable result from Table D1b is an overall classification accuracy of 87.5%]

  8. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition and analysis of biological images play an important role in helping humans describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of difference chain codes. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision and classification of the biological images are based on the description of morphological structures, color information and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications: the description, recognition and analysis of color flower images, and the dynamic description, recognition and analysis of cell-cycle images.

  9. Program for Analysis and Enhancement of Images

    NASA Technical Reports Server (NTRS)

    Lu, Yun-Chi

    1987-01-01

    Land Analysis System (LAS) is a collection of image-analysis computer programs designed to manipulate and analyze multispectral image data. Provides user with functions for ingesting various sensor data, radiometric and geometric corrections, image registration, training-site selection, supervised and unsupervised classification, Fourier-domain filtering, and image enhancement. Sufficiently modular and includes extensive library of subroutines to permit inclusion of new algorithmic programs. Commercial package International Mathematical & Statistical Library (IMSL) required for full implementation of LAS. Written in VAX FORTRAN 77, C, and Macro assembler for DEC VAX operating under VMS 4.0.

  10. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies containing 20-100 human multiple myeloma nuclei were simultaneously photographed at 100-power magnification using high-resolution holographic film developed to high contrast. An optical transform was obtained by focussing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were used to estimate the four most important cancer-detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors were discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror, which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low-image-bias templates were generated by four techniques: microphotography of samples, creation of typical shapes with a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image-bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor.
To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  11. Scale-Specific Multifractal Medical Image Analysis

    PubMed Central

    Braverman, Boris

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
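    The integral-image speedup of box counting described above can be sketched concretely: a single cumulative-sum pass lets every box sum be formed from four array lookups, and the dimension is then read off as a local log-log slope between scales rather than a single number. A minimal NumPy sketch (not the authors' implementation):

```python
import numpy as np

def box_counts(mask, sizes):
    """Count occupied boxes of each size using an integral image, so each
    scale costs four lookups per box instead of a size^2 sum."""
    ii = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1))
    ii[1:, 1:] = mask.cumsum(0).cumsum(1)   # integral image
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s
        w = (mask.shape[1] // s) * s
        # Sum inside each s-by-s box from its four integral-image corners.
        sums = (ii[s:h + 1:s, s:w + 1:s] - ii[s:h + 1:s, 0:w:s]
                - ii[0:h:s, s:w + 1:s] + ii[0:h:s, 0:w:s])
        counts.append(int((sums > 0).sum()))
    return counts

def scale_dependent_dimension(mask, sizes):
    """Box-counting dimension as a scale-dependent parameter: the local
    log-log slope between consecutive box sizes."""
    n = np.array(box_counts(mask, sizes), dtype=float)
    s = np.array(sizes, dtype=float)
    return -np.diff(np.log(n)) / np.diff(np.log(s))
```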

  12. Imaging flow cytometry for phytoplankton analysis.

    PubMed

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instrumentation relevant to the field and currently used for assessment of the composition and abundance of complex phytoplankton communities, size-structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments.

  13. Digital Image Analysis for DETECHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
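    Extracting RGB values from a scanned array image, as described above, amounts to averaging the color channels over each spot region. A minimal sketch; the spot coordinates and window size below are hypothetical, not taken from the paper:

```python
import numpy as np

def mean_rgb(img, x, y, r):
    """Mean red/green/blue values in a (2r+1)-pixel square window centred
    on a sensor spot at (x, y); img is an H x W x 3 array."""
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return patch.reshape(-1, 3).mean(axis=0)

# Synthetic scan with one colored spot (coordinates are made up).
scan = np.zeros((10, 10, 3))
scan[2:5, 2:5] = (200, 100, 50)
rgb = mean_rgb(scan, 3, 3, 1)
```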

  14. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features, including area fraction, particle size and spatial distributions, grain sizes, and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.

  15. Theory of Image Analysis and Recognition.

    DTIC Science & Technology

    1983-01-24

    Contributors and topics include: Narendra Ahuja (image models); Ramalingam Chellappa (image models); Matti Pietikainen (texture analysis); David G. Morgenthaler (3-D digital geometry); Angela Y. Wu. Representative technical reports: "Restoration Parameter Choice: A Quantitative Guide," TR-965, October 1980; Matti Pietikainen, "On the Use of Hierarchically Computed 'Mexican Hat'..."; Matti Pietikainen and Azriel Rosenfeld, "Image Segmentation by Texture Using Pyramid Node Linking," TR-1008, February 1981.

  16. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
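    The moving-window, frequency-based representation advocated above can be illustrated with a single-taper sketch; the published protocols use multitaper spectral estimates, and the 1.2 Hz "cardiac" component and all parameters below are made up for illustration:

```python
import numpy as np

def moving_window_spectra(x, fs, win, step):
    """Power spectra over short moving analysis windows (a single-taper
    stand-in for the multitaper estimates used in the paper)."""
    freqs = np.fft.rfftfreq(win, 1 / fs)
    spectra = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win] * np.hanning(win)   # taper the window
        spectra.append(np.abs(np.fft.rfft(seg)) ** 2)
    return freqs, np.array(spectra)

# Locate an approximately periodic artifact at a hypothetical 1.2 Hz.
fs = 50.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
freqs, S = moving_window_spectra(x, fs, win=250, step=125)
peak = freqs[S.mean(axis=0).argmax()]   # frequency of the artifact
```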

  17. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized by the aid of a high resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement of the image quality was achieved by means of various digital enhancement and filtering techniques.

  18. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  19. Machine learning applications in cell image analysis.

    PubMed

    Kan, Andrey

    2017-04-04

    Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with typical tasks of segmenting and tracking individual cells and modelling of reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook on ML, followed by basic concepts and definitions required for understanding examples. This article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation, and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions. Immunology and Cell Biology advance online publication, 4 April 2017; doi:10.1038/icb.2017.16.

  20. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-steps actin filaments extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological images processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  2. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    Like other areas of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers certain segments many times; therefore, neither a time series (a single-valued function of time) nor an image (a single-valued function of the plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  3. Retinal image analysis: concepts, applications and potential.

    PubMed

    Patton, Niall; Aslam, Tariq M; MacGillivray, Thomas; Deary, Ian J; Dhillon, Baljean; Eikelboom, Robert H; Yogesan, Kanagasingam; Constable, Ian J

    2006-01-01

    As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it is the only part of the human circulation that can be directly visualised non-invasively in vivo, readily photographed and subject to digital image analysis. Exciting developments in image processing relevant to ophthalmology over the past 15 years includes the progress being made towards developing automated diagnostic systems for conditions, such as diabetic retinopathy, age-related macular degeneration and retinopathy of prematurity. These diagnostic systems offer the potential to be used in large-scale screening programs, with the potential for significant resource savings, as well as being free from observer bias and fatigue. In addition, quantitative measurements of retinal vascular topography using digital image analysis from retinal photography have been used as research tools to better understand the relationship between the retinal microvasculature and cardiovascular disease. Furthermore, advances in electronic media transmission increase the relevance of using image processing in 'teleophthalmology' as an aid in clinical decision-making, with particular relevance to large rural-based communities. In this review, we outline the principles upon which retinal digital image analysis is based. We discuss current techniques used to automatically detect landmark features of the fundus, such as the optic disc, fovea and blood vessels. We review the use of image analysis in the automated diagnosis of pathology (with particular reference to diabetic retinopathy). 
We also review its role in defining and performing quantitative measurements of vascular topography

  4. Multiresolution morphological analysis of document images

    NASA Astrophysics Data System (ADS)

    Bloomberg, Dan S.

    1992-11-01

    An image-based approach to document image analysis is presented, that uses shape and textural properties interchangeably at multiple scales. Image-based techniques permit a relatively small number of simple and fast operations to be used for a wide variety of analysis problems with document images. The primary binary image operations are morphological and multiresolution. The generalized opening, a morphological operation, allows extraction of image features that have both shape and textural properties, and that are not limited by properties related to image connectivity. Reduction operations are necessary due to the large number of pixels at scanning resolution, and threshold reduction is used for efficient and controllable shape and texture transformations between resolution levels. Aspects of these techniques, which include sequences of threshold reductions, are illustrated by problems such as text/halftone segmentation and word-level extraction. Both the generalized opening and these multiresolution operations are then used to identify italic and bold words in text. These operations are performed without any attempt at identification of individual characters. Their robustness derives from the aggregation of statistical properties over entire words. However, the analysis of the statistical properties is performed implicitly, in large part through nonlinear image processing operations. The approximate computational cost of the basic operations is given, and the importance of operating at the lowest feasible resolution is demonstrated.
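    Threshold reduction, the key multiresolution operation above, is simple to state concretely: each 2x2 block of the binary image maps to one output pixel, which is ON when the block's ON-count meets a threshold. A minimal sketch (illustrative, not Bloomberg's implementation):

```python
import numpy as np

def threshold_reduce(img, thresh):
    """2x threshold reduction of a binary image: an output pixel is ON
    when at least `thresh` of its 2x2 source pixels are ON (thresh in
    1..4). thresh=1 behaves like dilation-then-subsample; thresh=4
    behaves like erosion-then-subsample."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    a = img[:h, :w].astype(int)
    # Count ON pixels in each non-overlapping 2x2 block.
    sums = a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]
    return sums >= thresh
```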

  5. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
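    The opcode-to-pixel mapping can be sketched as below. The hashing scheme and the pixel-overlap similarity used here are simplifications for illustration, not the paper's actual encoding or similarity calculation:

```python
import hashlib
import numpy as np

def opcodes_to_image(opcodes, width=16):
    """Map an opcode sequence to RGB pixels by hashing each opcode to a
    colour, then arranging the pixels row-wise into a fixed-width matrix."""
    pixels = []
    for op in opcodes:
        digest = hashlib.md5(op.encode()).digest()
        pixels.append(list(digest[:3]))     # first 3 hash bytes as R,G,B
    rows = -(-len(pixels) // width)         # ceiling division
    img = np.zeros((rows * width, 3), dtype=np.uint8)
    img[:len(pixels)] = pixels              # zero-pad the final row
    return img.reshape(rows, width, 3)

def image_similarity(img1, img2):
    """Fraction of pixel positions with identical colour over the
    overlapping rows of two opcode images."""
    r = min(img1.shape[0], img2.shape[0])
    same = (img1[:r] == img2[:r]).all(axis=2)
    return same.mean()
```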

  6. Edge enhanced morphology for infrared image analysis

    NASA Astrophysics Data System (ADS)

    Bai, Xiangzhi; Liu, Haonan

    2017-01-01

    Edge information is critical for infrared images. Morphological operators have been widely used for infrared image analysis. However, the edge information in infrared images is weak, and conventional morphological operators cannot make full use of it. To strengthen the edge information in morphological operators, edge enhanced morphology is proposed in this paper. Firstly, the edge enhanced dilation and erosion operators are defined and analyzed. Secondly, pseudo operators derived from the edge enhanced dilation and erosion operators are defined. Finally, applications to infrared image analysis are presented to verify the effectiveness of the proposed edge enhanced morphological operators. The proposed operators are useful for applications based on edge features and could be extended to a wide range of applications.
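    The paper's exact operators are not reproduced here, but the underlying idea can be sketched: add a scaled morphological gradient (an edge map) to the image before dilating, so weak infrared edges influence the result more strongly. The weighting and combination below are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def edge_enhanced_dilation(img, size=3, weight=0.5):
    """Sketch of the idea (not the paper's exact operator): boost weak edges
    by adding a scaled morphological gradient to the image, then dilate."""
    dil = ndimage.grey_dilation(img, size=(size, size))
    ero = ndimage.grey_erosion(img, size=(size, size))
    gradient = dil - ero                  # morphological edge strength
    enhanced = img + weight * gradient    # strengthen edge regions
    return ndimage.grey_dilation(enhanced, size=(size, size))

img = np.zeros((10, 10))
img[4:7, 4:7] = 1.0                       # a small bright target
out = edge_enhanced_dilation(img)
```

    Since the gradient is non-negative, the result dominates a plain grey dilation, with the extra response concentrated around edges.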

  7. Image Analysis of the Tumor Microenvironment.

    PubMed

    Lloyd, Mark C; Johnson, Joseph O; Kasprzak, Agnieszka; Bui, Marilyn M

    2016-01-01

    In the field of pathology it is clear that molecular genomics and digital imaging represent two promising future directions, and both are as relevant to the tumor microenvironment as they are to the tumor itself (Beck AH et al. Sci Transl Med 3(108):108ra113-08ra113, 2011). Digital imaging, or whole slide imaging (WSI), of glass histology slides facilitates a number of value-added competencies which were not previously possible with the traditional analog review of these slides under a microscope by a pathologist. As an important tool for investigational research, digital pathology can leverage the quantification and reproducibility offered by image analysis to add value to the pathology field. This chapter will focus on the application of image analysis to investigate the tumor microenvironment and how quantitative investigation can provide deeper insight into our understanding of the tumor to tumor microenvironment relationship.

  8. Topological image texture analysis for quality assessment

    NASA Astrophysics Data System (ADS)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy, and it helps detect image tampering in forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the Local Binary Pattern (LBP) as a texture feature descriptor. For any image, we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g., the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
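    A minimal sketch of the LBP side of this pipeline (basic 8-neighbour codes and the uniformity test; the persistent-homology step on the simplicial complexes is beyond a short example):

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP: each interior pixel gets an 8-bit code whose
    bits record which neighbours are >= the centre pixel."""
    c = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting at top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(shifts):
        nb = img[1 + dr:img.shape[0] - 1 + dr, 1 + dc:img.shape[1] - 1 + dc]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def is_uniform(code):
    """A code is 'uniform' when its circular bit pattern has at most two
    0/1 transitions; these are the bins grouped for the complexes."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

flat = np.ones((5, 5))
codes = lbp_codes(flat)   # a flat patch yields the all-ones uniform code
```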

  9. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels for 17 wheat varieties was collected after testing and crushing with a single kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture or spatial distribution of gray levels of an image were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed depending on class, hardness and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
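    Texture parameters of this kind are commonly derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch (horizontal neighbour pairs only; the study's actual feature set is not specified in the abstract):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for horizontal neighbour pairs,
    plus two classic texture features (contrast, energy) of the kind
    used to characterise gray-level spatial distribution."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = float(np.sum(glcm * (i - j) ** 2))  # high for rough texture
    energy = float(np.sum(glcm ** 2))              # high for smooth texture
    return contrast, energy

c1, e1 = glcm_features(np.ones((4, 4)))            # perfectly smooth patch
board = np.indices((4, 4)).sum(axis=0) % 2
c2, e2 = glcm_features(board)                      # maximally rough patch
```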

  10. Single-image molecular analysis for accelerated fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yan Mei

    2011-03-01

    We have developed a new single-molecule fluorescence imaging analysis method, SIMA, to improve the temporal resolution of single-molecule localization and tracking studies to millisecond timescales without compromising nanometer-range spatial resolution [1,2]. In this method, the width of the fluorescence intensity profile of a static or mobile molecule, imaged using submillisecond to millisecond exposure times, is used for localization and dynamics analysis. We apply this method to three single-molecule studies: (1) subdiffraction molecular separation measurements, (2) axial localization precision measurements, and (3) protein diffusion coefficient measurements in free solution. Applications of SIMA in flagellar IFT particle analysis, localization of UgtP (a cell division regulator protein) in live cells, and diffusion coefficient measurement of LacI in vitro and in vivo will be discussed.
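    The core quantity in this approach is the width of a molecule's intensity profile. A toy version of the width estimate (rms width of a normalized 1-D profile; the published estimators are more elaborate):

```python
import numpy as np

def profile_width(profile, x=None):
    """Estimate the rms width of a fluorescence intensity profile; in the
    SIMA idea, broadening of this width over the exposure time encodes
    the molecule's motion."""
    if x is None:
        x = np.arange(len(profile))
    p = profile / profile.sum()          # treat profile as a distribution
    mean = np.sum(x * p)
    return float(np.sqrt(np.sum(p * (x - mean) ** 2)))

x = np.linspace(-10, 10, 2001)
profile = np.exp(-x ** 2 / 8)            # Gaussian profile with sigma = 2
w = profile_width(profile, x)
```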

  11. Hybrid Expert Systems In Image Analysis

    NASA Astrophysics Data System (ADS)

    Dixon, Mark J.; Gregory, Paul J.

    1987-04-01

    Vision systems capable of inspecting industrial components and assemblies have a large potential market if they can be easily programmed and produced quickly. Currently, vision application software written in conventional high-level languages such as C or Pascal is produced by experts in program design, image analysis, and process control. Applications written this way are difficult to maintain and modify. Unless other similar inspection problems can be found, the final program is essentially one-off redundant code. A general-purpose vision system, targeted for the Visual Machines Ltd. C-VAS 3000 image processing workstation, is described which will make writing image analysis software accessible to those who are experts in neither programming computers nor image analysis. A significant reduction in the effort required to produce vision systems will be gained through a graphically driven interactive application generator. Finally, an expert system will be layered on top to guide the naive user through the process of generating an application.

  12. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are done manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if they fall below 0.75 or rise above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
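    The ratio-profile logic reduces to a few lines. A sketch using the 0.75/1.25 thresholds quoted in the abstract (arrays and helper are illustrative, not the Magiscan code):

```python
import numpy as np

def flag_imbalances(fitc, tritc, low=0.75, high=1.25):
    """Pixelwise FITC/TRITC fluorescence ratio along a chromosome axis;
    positions whose ratio falls below `low` suggest loss of test DNA,
    and above `high` suggest gain (the 0.75/1.25 thresholds in the text)."""
    ratio = fitc / tritc
    return ratio, ratio < low, ratio > high

fitc  = np.array([1.0, 1.0, 0.6, 1.0, 1.5])   # toy smoothed intensity profile
tritc = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
ratio, loss, gain = flag_imbalances(fitc, tritc)
```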

  13. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-03-13

    With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmic care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options like saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist gain a better understanding when analyzing the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable with the state of the art.

  14. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging, using the ImageJ software package for image processing; the FracLac plugin was applied for fractal analysis over a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
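    The box-counting method mentioned can be sketched directly. This toy version counts occupied boxes at dyadic scales and fits the log-log slope (FracLac's implementation is more sophisticated):

```python
import numpy as np

def box_count_dimension(mask):
    """Box-counting estimate of the fractal dimension of a binary mask:
    count occupied boxes at several scales and fit log N against log s;
    the negated slope is the dimension estimate."""
    sizes = [1, 2, 4, 8]
    counts = []
    n = mask.shape[0]
    for s in sizes:
        tiles = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append((tiles.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)
d = box_count_dimension(filled)   # a filled square should give ~2
```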

  15. Knowledge based imaging for terrain analysis

    NASA Technical Reports Server (NTRS)

    Holben, Rick; Westrom, George; Rossman, David; Kurrasch, Ellie

    1992-01-01

    A planetary rover will have various vision based requirements for navigation, terrain characterization, and geological sample analysis. In this paper we describe a knowledge-based controller and sensor development system for terrain analysis. The sensor system consists of a laser ranger and a CCD camera. The controller, under the input of high-level commands, performs such functions as multisensor data gathering, data quality monitoring, and automatic extraction of sample images meeting various criteria. In addition to large scale terrain analysis, the system's ability to extract useful geological information from rock samples is illustrated. Image and data compression strategies are also discussed in light of the requirements of earth bound investigators.

  16. Rock fracture image acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zongpu, Jia; Chen, Liwan

    2007-12-01

    As a cooperation project between Sweden and China, this paper presents rock fracture image acquisition and analysis. Rock fracture images are acquired using UV illumination and visible optical illumination. To represent the fracture network reasonably, we set up models to characterize it; based on these models, we used the best-fit Feret method to automatically determine the fracture zone, then skeletonized the fractures to obtain endpoints, junctions, holes, particles, and branches. Based on the new parameters and a number of common parameters, the fracture network density, porosity, connectivity and complexity can be obtained, and the fracture network is characterized. In the following, we first present basic considerations and basic parameters for fractures (Primary study of characteristics of rock fractures), then set up a model for fracture network analysis (Fracture network analysis), use the model to analyze fracture networks in different images (Two-dimensional fracture network analysis based on slices), and finally give conclusions and suggestions.

  17. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  18. Deep Learning in Medical Image Analysis

    PubMed Central

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2016-01-01

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially deep learning, have made a big leap toward helping identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features designed mostly on the basis of domain-specific knowledge, lies at the core of these advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performance in various medical applications. In this article, we introduce the fundamentals of deep learning methods and review their successes in image registration, anatomical/cell structure detection, tissue segmentation, computer-aided disease diagnosis or prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvements. PMID:28301734

  19. Single particle raster image analysis of diffusion.

    PubMed

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimating the image correlation function, we introduce the method SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimate the diffusion coefficient of each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates. It also directly provides standard error estimates. The method should be possible to extend to the study of heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying the SPRIA method is that the particle concentration is low enough that the individual particles can be identified. We also describe a bootstrap method for estimating the standard error of standard RICS.
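    For a freely diffusing particle the per-particle maximum-likelihood estimate has a closed form, which gives the flavour of the per-particle step (simulated track; real data adds localization noise and motion blur, which the paper's estimator accounts for):

```python
import numpy as np

def diffusion_mle(track, dt):
    """Maximum-likelihood D for a pure 2-D Brownian track sampled at
    interval dt: displacements are Gaussian with variance 2*D*dt per
    axis, so D_hat = <dx^2 + dy^2> / (4*dt)."""
    steps = np.diff(track, axis=0)
    return float(np.mean(np.sum(steps ** 2, axis=1)) / (4 * dt))

rng = np.random.default_rng(0)
D_true, dt = 0.5, 0.01
steps = rng.normal(0, np.sqrt(2 * D_true * dt), size=(20000, 2))
track = np.cumsum(steps, axis=0)      # simulated Brownian trajectory
D_hat = diffusion_mle(track, dt)      # should recover D_true closely
```

    Averaging such per-particle estimates over all identified particles yields the whole-image estimate described in the abstract.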

  20. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction.
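    Illustrative only: two simple global haze cues plus an ordinary least-squares fit stand in for the six features and prediction model of the paper (the feature choice and synthetic data here are assumptions, not the published pipeline):

```python
import numpy as np

def haze_features(img):
    """Two simple global features usable as haze cues: overall contrast
    (std of intensities) and mean gradient magnitude; haze suppresses
    both. Real systems add sun position, time, weather, etc."""
    gy, gx = np.gradient(img.astype(float))
    return np.array([img.std(), np.hypot(gx, gy).mean()])

f = haze_features(np.outer(np.arange(8.0), np.ones(8)))  # smooth ramp image

# Fit a linear model from features to a PM index on toy data (hypothetical).
rng = np.random.default_rng(2)
X = rng.random((50, 2))                       # 50 images x 2 features
true_w = np.array([-80.0, -120.0])            # hazier -> lower features
y = X @ true_w + 200 + rng.normal(0, 1, 50)   # toy PM2.5 index
A = np.c_[X, np.ones(50)]                     # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
```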

  1. Image Processing for Galaxy Ellipticity Analysis

    NASA Astrophysics Data System (ADS)

    Stankus, Paul

    2015-04-01

    Shape analysis of statistically large samples of galaxy images can be used to reveal the imprint of weak gravitational lensing by dark matter distributions. As new, large-scale surveys expand the potential catalog, galaxy shape analysis suffers the (coupled) problems of high noise and uncertainty in the prior morphology. We investigate a new image processing technique to help mitigate these problems, in which repeated auto-correlations and auto-convolutions are employed to push the true shape toward a universal (Gaussian) attractor while relatively suppressing uncorrelated pixel noise. The goal is reliable reconstruction of original image moments, independent of image shape. First test evaluations of the technique on small control samples will be presented, and future applicability discussed. Supported by the US-DOE.
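    The Gaussian-attractor idea can be demonstrated in one function: repeated self-convolution drives any compactly supported profile toward a Gaussian shape (1-D toy version; the 2-D auto-correlation machinery and moment reconstruction are not reproduced):

```python
import numpy as np

def autoconvolve(profile, times=3):
    """Repeatedly convolve a 1-D profile with itself; by the central limit
    theorem the result approaches a Gaussian regardless of the starting
    shape, while uncorrelated noise is relatively suppressed."""
    out = profile.copy()
    for _ in range(times):
        out = np.convolve(out, out)
        out /= out.sum()              # keep it a unit-mass profile
    return out

box = np.ones(5) / 5.0                # decidedly non-Gaussian start
g = autoconvolve(box, times=3)        # smooth, symmetric, bell-shaped
```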

  2. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757

  3. Functional data analysis in brain imaging studies.

    PubMed

    Tian, Tian Siva

    2010-01-01

    Functional data analysis (FDA) considers the continuity of the curves or functions, and is a topic of increasing interest in the statistics community. FDA is commonly applied to time-series and spatial-series studies. The development of functional brain imaging techniques in recent years made it possible to study the relationship between brain and mind over time. Consequently, an enormous amount of functional data is collected and needs to be analyzed. Functional techniques designed for these data are in strong demand. This paper discusses three statistically challenging problems utilizing FDA techniques in functional brain imaging analysis. These problems are dimension reduction (or feature extraction), spatial classification in functional magnetic resonance imaging studies, and the inverse problem in magneto-encephalography studies. The application of FDA to these issues is relatively new but has been shown to be considerably effective. Future efforts can further explore the potential of FDA in functional brain imaging studies.

  4. Integral-geometry morphological image analysis

    NASA Astrophysics Data System (ADS)

    Michielsen, K.; De Raedt, H.

    2001-07-01

    This paper reviews a general method to characterize the morphology of two- and three-dimensional patterns in terms of geometrical and topological descriptors. Based on concepts of integral geometry, it involves the calculation of the Minkowski functionals of black-and-white images representing the patterns. The result of this approach is an objective, numerical characterization of a given pattern. We briefly review the basic elements of morphological image processing, a technique to transform images to patterns that are amenable to further morphological image analysis. The image processing technique is applied to electron microscope images of nano-ceramic particles and metal-oxide precipitates. The emphasis of this review is on the practical aspects of the integral-geometry-based morphological image analysis but we discuss its mathematical foundations as well. Applications to simple lattice structures, triply periodic minimal surfaces, and the Klein bottle serve to illustrate the basic steps of the approach. More advanced applications include random point sets, percolation and complex structures found in block copolymers.
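    In 2-D the three Minkowski functionals are area, boundary length, and Euler characteristic. A compact pixel-complex computation (treating ON pixels as closed unit squares; connectivity conventions vary between implementations):

```python
import numpy as np

def minkowski_2d(mask):
    """The three 2-D Minkowski functionals of a binary pattern, treating
    each ON pixel as a closed unit square: area, boundary length, and
    Euler characteristic (vertices - edges + faces of the pixel complex)."""
    P = np.pad(mask.astype(bool), 1)
    F = int(P.sum())                                          # faces (pixels)
    E = int((P[:-1] | P[1:]).sum() + (P[:, :-1] | P[:, 1:]).sum())   # edges
    V = int((P[:-1, :-1] | P[:-1, 1:] | P[1:, :-1] | P[1:, 1:]).sum())
    perimeter = int((P[:-1] ^ P[1:]).sum() + (P[:, :-1] ^ P[:, 1:]).sum())
    return F, perimeter, V - E + F

square = np.ones((4, 4), dtype=bool)
area, perim, euler = minkowski_2d(square)   # solid square: Euler number 1
```

    A pattern with a hole, such as a 3x3 ring, has Euler characteristic 0, which is how the functionals capture topology as well as geometry.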

  5. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to geometrically locate pulse positions with greater certainty.

  6. Chromatic Image Analysis For Quantitative Thermal Mapping

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1995-01-01

    Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.

  7. Advanced automated char image analysis techniques

    SciTech Connect

    Tao Wu; Edward Lester; Michael Cloke

    2006-05-15

    Char morphology is an important characteristic when attempting to understand coal behavior and coal burnout. In this study, an augmented algorithm has been proposed to identify char types using image analysis. On the basis of a series of image processing steps, a char image is singled out from the whole image, which then allows the important major features of the char particle to be measured, including size, porosity, and wall thickness. The techniques for automated char image analysis have been tested against char images taken from the ICCP Char Atlas as well as actual char particles derived from pyrolyzed char samples. Thirty different chars were prepared in a drop tube furnace operating at 1300{sup o}C, 1% oxygen, and 100 ms from 15 different world coals sieved into two size fractions (53-75 and 106-125 {mu}m). The results from this automated technique are comparable with those from manual analysis, and the additional detail from the automated system has potential use in applications such as combustion modeling systems. Obtaining highly detailed char information with automated methods has traditionally been hampered by the difficulty of automatic recognition of individual char particles. 20 refs., 10 figs., 3 tabs.

  8. Computer assisted analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Sawicki, M.; Munhutu, P.; DaPonte, J.; Caragianis-Broadbridge, C.; Lehman, A.; Sadowski, T.; Garcia, E.; Heyden, C.; Mirabelle, L.; Benjamin, P.

    2009-01-01

    The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly more dependent on nanotechnology. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community. When using such programs, it is important to compare their performance, in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles with known size. A secondary goal was to compare this popular open-source general purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur in specimen preparation into separate particles using a watershed algorithm was particularly helpful.

  9. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains.

  10. On Two-Dimensional ARMA Models for Image Analysis.

    DTIC Science & Technology

    1980-03-24

    2-D ARMA models for image analysis. Particular emphasis is placed on restoration of noisy images using 2-D ARMA models. Computer results are... It is concluded that the models are very effective linear models for image analysis. (Author)

  11. Selecting an image analysis minicomputer system

    NASA Technical Reports Server (NTRS)

    Danielson, R.

    1981-01-01

    Factors to be weighed when selecting a minicomputer system as the basis for an image analysis computer facility vary depending on whether the user organization procures a new computer or selects an existing facility to serve as an image analysis host. Some conditions not directly related to hardware or software should be considered such as the flexibility of the computer center staff, their encouragement of innovation, and the availability of the host processor to a broad spectrum of potential user organizations. Particular attention must be given to: image analysis software capability; the facilities of a potential host installation; the central processing unit; the operating system and languages; main memory; disk storage; tape drives; hardcopy output; and other peripherals. The operational environment, accessibility; resource limitations; and operational supports are important. Charges made for program execution and data storage must also be examined.

  12. Image analysis of insulation mineral fibres.

    PubMed

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view.

  13. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    … Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence … scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision.

  14. Automated eXpert Spectral Image Analysis

    SciTech Connect

    Keenan, Michael R.

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high-dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X-ray Fluorescence …
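A hedged sketch (not AXSIA's actual code) of the eigenanalysis step this record describes: unfold a spectrum-image into a 2-D data matrix, eigendecompose its channel crossproduct matrix, and estimate the number of significant components from the eigenvalues. The synthetic data, noise level, and the crude significance rule are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nch, ncomp = 8, 8, 32, 3

# Synthetic spectrum-image: 3 pure-component spectra mixed with random
# spatial abundances, plus a little white noise.
spectra = rng.random((ncomp, nch))          # pure-component spectra
abund = rng.random((nx * ny, ncomp))        # spatial abundances
data = abund @ spectra + 0.001 * rng.standard_normal((nx * ny, nch))

cross = data.T @ data                       # channel crossproduct matrix
evals, evecs = np.linalg.eigh(cross)
evals = evals[::-1]                         # descending order

# Crude stand-in for AXSIA's automatic criterion: count eigenvalues that
# sit well above the noise floor (the median eigenvalue is noise-level
# here because only 3 of 32 eigenvalues carry signal).
k = int(np.sum(evals > 50 * np.median(evals)))
print(k)  # -> 3 for this synthetic example
```

The retained eigenvectors would then be rotated and refined by constrained alternating least squares to obtain physically meaningful spectra and abundances.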

  15. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future.

  16. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings of all types of ordnance. A large inventory of cameras is available on the market for these tasks. A new class of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for much of the work now being performed: television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  17. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2007-11-02

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for …

  18. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Deep Learning in Medical Image Analysis.

    PubMed

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement. Expected final online publication date for the Annual Review of Biomedical Engineering Volume 19 is June 4, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  20. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach that analytically unifies both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and of optical imaging through scattering layers. We show that, to construct devices that conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable using naturally occurring materials. As examples, we experimentally verify a method of directionally hiding distant objects and creating illusions using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
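A minimal sketch of the Fourier-optics viewpoint this record takes (our toy signal, not the authors' devices): an imaging system acts on a field's spatial-frequency spectrum through a transfer function H(k), so synthesizing H shapes what the system passes, blocks, or hides. Here H is an ideal low-pass aperture applied to a narrow slit.

```python
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
field = (np.abs(x) < 0.05).astype(float)    # narrow slit "object"

k = np.fft.fftfreq(n, d=x[1] - x[0])        # spatial frequencies (cycles/unit)
H = (np.abs(k) < 10.0).astype(float)        # ideal low-pass transfer function

# Filtered image: inverse transform of (spectrum x transfer function).
image = np.fft.ifft(np.fft.fft(field) * H).real

# Sharp detail is lost (energy strictly decreases) while the DC level,
# i.e. the total flux, is preserved because H(0) = 1.
print(np.sum(image**2) < np.sum(field**2))  # -> True
```

Replacing this low-pass H with the transfer function of a complementary-media cloak is, in the paper's framework, what turns the same formalism into a hiding or illusion device.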

  1. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis, of a scanning electron microscopy (SEM) image, is a time consuming process and requires continuous human intervention and efforts. This paper presents an image processing based method for automated offline analyses of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at the noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state of the art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring the parameters, like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited a good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm. PMID:25089617
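The box-counting step used above for fractal-dimension (FD) estimation can be sketched in a few lines: count the boxes of size s that contain foreground pixels, then fit log N(s) against log(1/s). The toy image and helper names are ours, not the paper's code.

```python
import math

def box_count(pixels, size):
    """Count size x size boxes containing at least one foreground pixel."""
    boxes = set()
    for (r, c) in pixels:
        boxes.add((r // size, c // size))
    return len(boxes)

# Toy binary image: the diagonal of a 64 x 64 grid (a line, so FD ~ 1).
n = 64
pixels = {(i, i) for i in range(n)}

sizes = [1, 2, 4, 8, 16]
counts = [box_count(pixels, s) for s in sizes]

# Least-squares slope of log N(s) vs log(1/s) estimates the FD.
xs = [math.log(1.0 / s) for s in sizes]
ys = [math.log(c) for c in counts]
mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
fd = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
     / sum((x - mx) ** 2 for x in xs)
print(round(fd, 2))  # -> 1.0 for a straight line
```

For a fractal boundary the counts no longer halve with each doubling of s, and the slope lands between 1 and 2.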

  2. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
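The core measurement described above — area of stain removed within a masked interproximal region — amounts to counting above-threshold pixels inside a mask. A hypothetical sketch (function name, toy image, and threshold are ours):

```python
def cleaned_fraction(image, mask, threshold):
    """image, mask: 2-D lists; fraction of masked pixels above threshold."""
    inside = cleaned = 0
    for row_img, row_mask in zip(image, mask):
        for value, m in zip(row_img, row_mask):
            if m:
                inside += 1
                if value > threshold:   # bright pixel: stain was removed
                    cleaned += 1
    return cleaned / inside

# Toy 4 x 4 grayscale image; the mask covers its left half
# (the "interproximal region").
image = [[200, 40, 40, 40],
         [220, 90, 40, 40],
         [210, 30, 40, 40],
         [ 50, 60, 40, 40]]
mask = [[1, 1, 0, 0]] * 4
print(cleaned_fraction(image, mask, 128))  # -> 0.375 (3 of 8 masked pixels)
```

Averaging this fraction over replicate brushings is what drives the 10-fold precision gain over visual grading that the record reports.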

  3. Piecewise flat embeddings for hyperspectral image analysis

    NASA Astrophysics Data System (ADS)

    Hayes, Tyler L.; Meinhold, Renee T.; Hamilton, John F.; Cahill, Nathan D.

    2017-05-01

    Graph-based dimensionality reduction techniques such as Laplacian Eigenmaps (LE), Local Linear Embedding (LLE), Isometric Feature Mapping (ISOMAP), and Kernel Principal Components Analysis (KPCA) have been used in a variety of hyperspectral image analysis applications for generating smooth data embeddings. Recently, Piecewise Flat Embeddings (PFE) were introduced in the computer vision community as a technique for generating piecewise constant embeddings that make data clustering / image segmentation a straightforward process. In this paper, we show how PFE arises by modifying LE, yielding a constrained ℓ1-minimization problem that can be solved iteratively. Using publicly available data, we carry out experiments to illustrate the implications of applying PFE to pixel-based hyperspectral image clustering and classification.
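A sketch of the Laplacian Eigenmaps (LE) step that PFE modifies: build a similarity graph over the spectra, form the unnormalized graph Laplacian L = D - W, and embed with the eigenvectors of the smallest nonzero eigenvalues. The toy two-cluster data and parameters are ours; PFE replaces this smooth embedding with a piecewise-flat one via an ℓ1 objective.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.05, size=(10, 3))
b = rng.normal(2.0, 0.05, size=(10, 3))
X = np.vstack([a, b])                       # 20 toy "spectra", 3 bands

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)                             # Gaussian affinities
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W                   # unnormalized graph Laplacian

evals, evecs = np.linalg.eigh(L)
embedding = evecs[:, 1]                     # Fiedler vector: 1-D embedding

# The two clusters land on opposite sides of zero in the embedding,
# making clustering a simple sign test.
print((embedding[:10] > 0).all() != (embedding[10:] > 0).all())  # -> True
```

In the hyperspectral setting each row of X is a pixel's spectrum, and several embedding coordinates are kept rather than one.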

  4. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.

  5. Digital image analysis of haematopoietic clusters.

    PubMed

    Benzinou, A; Hojeij, Y; Roudot, A-C

    2005-02-01

    Counting and differentiating cell clusters is a tedious task when performed with a light microscope. Moreover, biased counts and interpretations are difficult to avoid because of the difficulty of evaluating the limits between different types of clusters. Presented here is a computer-based application able to solve these problems. The image analysis system is entirely automatic, from stage screening to the statistical analysis of the results of each experimental plate. Good correlations are found with measurements made by a specialised technician.
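The counting operation at the heart of such a system is connected-component labeling. A hedged, pure-Python sketch (our toy grid, not the paper's software), using breadth-first search over 4-connected foreground pixels:

```python
from collections import deque

def label_clusters(grid):
    """Return a list of cluster sizes for 4-connected foreground pixels."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:                    # flood-fill one cluster
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes

grid = [[1, 1, 0, 0, 1],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 0, 0],
        [1, 0, 0, 1, 1]]
print(sorted(label_clusters(grid)))  # -> [1, 2, 2, 3]
```

Differentiating cluster types would then use per-cluster features (size, shape, intensity) on top of these labels.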

  6. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  7. Visualization of parameter space for image analysis.

    PubMed

    Pretorius, A Johannes; Bray, Mark-Anthony P; Carpenter, Anne E; Ruddle, Roy A

    2011-12-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step--initialization of sampling--and the last step--visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler--a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
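The sampling step described above — evaluate the algorithm over a grid of parameter combinations, then let the user explore the scored outputs — can be sketched as follows. The stand-in `quality` function and parameter names are hypothetical, not CellProfiler's or Paramorama's API:

```python
from itertools import product

def quality(threshold, smoothing):
    # Hypothetical scoring function whose optimum is at
    # threshold = 0.5, smoothing = 2 (a stand-in for judging the
    # segmentation output of an image analysis pipeline).
    return -((threshold - 0.5) ** 2 + (smoothing - 2) ** 2)

thresholds = [0.1, 0.3, 0.5, 0.7, 0.9]
smoothings = [1, 2, 3, 4]

# Sample the full parameter grid up front, as the plug-in does, so the
# user only interacts at initialization and at visual analysis time.
samples = [(t, s, quality(t, s)) for t, s in product(thresholds, smoothings)]
best = max(samples, key=lambda rec: rec[2])
print(best[:2])  # -> (0.5, 2)
```

In the real workflow the scores come from user tagging of image-based output rather than an analytic function, which is why the visualization layer matters.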

  8. Visualization of Parameter Space for Image Analysis

    PubMed Central

    Pretorius, A. Johannes; Bray, Mark-Anthony P.; Carpenter, Anne E.; Ruddle, Roy A.

    2013-01-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step - initialization of sampling - and the last step - visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler - a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach. PMID:22034361

  9. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  10. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  11. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  12. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  13. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  14. Automated retinal image analysis over the internet.

    PubMed

    Tsai, Chia-Ling; Madore, Benjamin; Leotta, Matthew J; Sofka, Michal; Yang, Gehua; Majerovics, Anna; Tanenbaum, Howard L; Stewart, Charles V; Roysam, Badrinath

    2008-07-01

    Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities.
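One elementary building block behind montaging and cross-modality registration is estimating the transform that aligns matched vessel landmarks between two images. As a hedged sketch (the actual system uses richer, validated transformation models), here is the Kabsch/Procrustes estimate of a rigid alignment from synthetic correspondences:

```python
import numpy as np

rng = np.random.default_rng(2)
src = rng.random((6, 2)) * 100                      # landmarks in image 1

theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([12.0, -7.0])       # same landmarks, image 2

# Kabsch: center both point sets, SVD of the covariance, compose R and t.
src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflection
R = Vt.T @ np.diag([1.0, d]) @ U.T
t = dst.mean(0) - src.mean(0) @ R.T

# The recovered transform maps image-1 landmarks onto image 2.
print(np.allclose(src @ R.T + t, dst))  # -> True
```

Real retinal registration must also handle the curved fundus and modality differences, hence the higher-order models the record alludes to.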

  15. Digital imaging analysis to assess scar phenotype.

    PubMed

    Smith, Brian J; Nidey, Nichole; Miller, Steven F; Moreno Uribe, Lina M; Baum, Christian L; Hamilton, Grant S; Wehby, George L; Dunnwald, Martine

    2014-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive, and unbiased assessments of postsurgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue digital images of postsurgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD imaging system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software, and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (p ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians' clinical outcome ratings from the same images showed high interobserver variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome.

  16. Digital imaging analysis to assess scar phenotype

    PubMed Central

    Smith, Brian J.; Nidey, Nichole; Miller, Steven F.; Moreno, Lina M.; Baum, Christian L.; Hamilton, Grant S.; Wehby, George L.; Dunnwald, Martine

    2015-01-01

    In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive and unbiased assessments of post-surgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue (RGB) digital images of post-surgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD image system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (P ranging from 1.20 × 10^-5 to 1.95 × 10^-14). Physicians' clinical outcome ratings from the same images showed high inter-observer variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome. PMID:24635173

  17. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both the George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany over the last nine years with major funding from Robert Bosch GmbH and Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical-column architecture and adaptive functions of the mammalian visual cortex.

  18. Analysis of spatial pseudodepolarizers in imaging systems

    NASA Technical Reports Server (NTRS)

    Mcguire, James P., Jr.; Chipman, Russell A.

    1990-01-01

    The objective of a number of optical instruments is to measure the intensity accurately without bias as to the incident polarization state. One method to overcome polarization bias in optical systems is the insertion of a spatial pseudodepolarizer. Both the degree of depolarization and image degradation (from the polarization aberrations of the pseudodepolarizer) are analyzed for two depolarizer designs: (1) the Cornu pseudodepolarizer, effective for linearly polarized light, and (2) the dual Babinet compensator pseudodepolarizer, effective for all incident polarization states. The image analysis uses a matrix formalism to describe the polarization dependence of the diffraction patterns and optical transfer function.

  19. Characterization of microrod arrays by image analysis

    NASA Astrophysics Data System (ADS)

    Hillebrand, Reinald; Grimm, Silko; Giesa, Reiner; Schmidt, Hans-Werner; Mathwig, Klaus; Gösele, Ulrich; Steinhart, Martin

    2009-04-01

    The uniformity of the properties of array elements was evaluated by statistical analysis of microscopic images of array structures, assuming that the brightness of the array elements correlates quantitatively or qualitatively with a microscopically probed quantity. Derivatives and autocorrelation functions of cumulative frequency distributions of the object brightnesses were used to quantify variations in object properties throughout arrays. Thus, different specimens, the same specimen at different stages of its fabrication or use, and different imaging conditions can be compared systematically. As an example, we analyzed scanning electron micrographs of microrod arrays and calculated the percentage of broken microrods.
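The statistic described above — a cumulative frequency distribution of element brightnesses and its derivative — can be sketched directly (the toy brightness values and the mid-gray cutoff are ours):

```python
def cumulative_distribution(brightness, levels):
    """Fraction of array elements at or below each gray level."""
    n = len(brightness)
    return [sum(1 for b in brightness if b <= g) / n for g in range(levels)]

# Toy array: 90 intact microrods (bright, ~200) and 10 broken ones (~40).
brightness = [200] * 90 + [40] * 10
cdf = cumulative_distribution(brightness, 256)

# The derivative of the cumulative distribution is the brightness histogram;
# a second mode at low brightness flags damaged array elements.
deriv = [cdf[g + 1] - cdf[g] for g in range(255)]

broken_fraction = cdf[128]      # everything darker than mid-gray
print(broken_fraction)  # -> 0.1
```

Comparing such distributions (e.g. via their autocorrelation) across specimens or fabrication stages is what makes the evaluation systematic rather than visual.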

  20. Recent Advances in Morphological Cell Image Analysis

    PubMed Central

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have contributed greatly to clinical results. Morphological cell analysis covers cellular shape and regularity, classification, statistics, and diagnosis, among other tasks. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  1. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

    To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites as identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  2. Fast image analysis in polarization SHG microscopy.

    PubMed

    Amat-Roldan, Ivan; Psilodimitrakopoulos, Sotiris; Loza-Alvarez, Pablo; Artigas, David

    2010-08-02

    Pixel-resolution polarization-sensitive second harmonic generation (PSHG) imaging has recently been shown to be a promising imaging modality that largely enhances the capabilities of conventional intensity-based SHG microscopy. PSHG is able to obtain structural information from the elementary SHG active structures, which play an important role in many biological processes. Although the technique is of major interest, acquiring such information requires long offline processing, even with current computers. In this paper, we present an approach based on Fourier analysis of the anisotropy signature that allows processing the PSHG images in less than a second on standard single-core computers. This represents a temporal improvement of several orders of magnitude compared to conventional fitting algorithms, and opens up the possibility of obtaining PSHG information quickly enough for potential use in medical applications.
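The Fourier route the authors describe can be sketched in a few lines: when the PSHG response is sampled at evenly spaced polarization angles, the harmonic coefficients that a fitting algorithm would estimate iteratively fall directly out of an FFT. A minimal numpy sketch, assuming the standard cos/sin 2α and 4α harmonic model (the function name and model form are illustrative assumptions, not the authors' code):

```python
import numpy as np

def pshg_harmonics(intensity):
    """Extract the 0th, 2nd, and 4th angular harmonics of a PSHG response
    I(a) = a0 + a2*cos(2a) + b2*sin(2a) + a4*cos(4a) + b4*sin(4a),
    sampled at n evenly spaced angles a_j = pi*j/n over [0, pi).
    cos(2a) completes exactly one period over [0, pi), so it maps to FFT
    bin 1 and cos(4a) to bin 2; no iterative fitting is required."""
    n = len(intensity)
    f = np.fft.rfft(intensity) / n
    a0 = f[0].real
    a2, b2 = 2 * f[1].real, -2 * f[1].imag
    a4, b4 = 2 * f[2].real, -2 * f[2].imag
    return a0, a2, b2, a4, b4
```

Per pixel this reads a handful of FFT bins instead of running a nonlinear fit, which is where an orders-of-magnitude speedup over fitting plausibly comes from.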

  3. Automated quantitative image analysis of nanoparticle assembly

    NASA Astrophysics Data System (ADS)

    Murthy, Chaitanya R.; Gao, Bo; Tao, Andrea R.; Arya, Gaurav

    2015-05-01

    The ability to characterize higher-order structures formed by nanoparticle (NP) assembly is critical for predicting and engineering the properties of advanced nanocomposite materials. Here we develop quantitative image analysis software to characterize key structural properties of NP clusters from experimental images of nanocomposites. This analysis can be carried out on images captured at intermittent times during assembly to monitor the time evolution of NP clusters in a highly automated manner. The software outputs averages and distributions of the size, radius of gyration, fractal dimension, backbone length, end-to-end distance, anisotropic ratio, and aspect ratio of NP clusters as a function of time, along with bootstrapped error bounds for all calculated properties. The polydispersity of the NP building blocks and biases in the sampling of NP clusters are accounted for through the use of probabilistic weights. This software, named Particle Image Characterization Tool (PICT), has been made publicly available and could be an invaluable resource for researchers studying NP assembly. To demonstrate its practical utility, we used PICT to analyze scanning electron microscopy images taken during the assembly of surface-functionalized metal NPs of differing shapes and sizes within a polymer matrix. PICT is used to characterize and analyze the morphology of NP clusters, providing quantitative information that can be used to elucidate the physical mechanisms governing NP assembly.
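The bootstrapped error bounds mentioned above can be illustrated with a percentile bootstrap over a list of measured cluster properties. This is a generic sketch, not PICT's implementation (in particular, the probabilistic weighting for sampling bias is omitted):

```python
import numpy as np

def bootstrap_bounds(values, stat=np.mean, n_boot=2000, ci=95, seed=0):
    """Percentile-bootstrap error bounds for a statistic of a cluster
    property (e.g. mean radius of gyration): resample with replacement,
    recompute the statistic, and take the central ci% of the replicates."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    reps = np.array([stat(rng.choice(values, size=len(values), replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.percentile(reps, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return stat(values), lo, hi
```

The same routine works for any scalar property PICT reports, since the bootstrap only needs the per-cluster measurements, not the underlying images.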

  4. Endoscopic image analysis in semantic space.

    PubMed

    Kwitt, R; Vasconcelos, N; Rasiwasia, N; Uhl, A; Davis, B; Häfner, M; Wrba, F

    2012-10-01

    A novel approach to the design of a semantic, low-dimensional, encoding for endoscopic imagery is proposed. This encoding is based on recent advances in scene recognition, where semantic modeling of image content has gained considerable attention over the last decade. While the semantics of scenes are mainly comprised of environmental concepts such as vegetation, mountains or sky, the semantics of endoscopic imagery are medically relevant visual elements, such as polyps, special surface patterns, or vascular structures. The proposed semantic encoding differs from the representations commonly used in endoscopic image analysis (for medical decision support) in that it establishes a semantic space, where each coordinate axis has a clear human interpretation. It is also shown to establish a connection to Riemannian geometry, which enables principled solutions to a number of problems that arise in both physician training and clinical practice. This connection is exploited by leveraging results from information geometry to solve problems such as (1) recognition of important semantic concepts, (2) semantically-focused image browsing, and (3) estimation of the average-case semantic encoding for a collection of images that share a medically relevant visual detail. The approach can provide physicians with an easily interpretable, semantic encoding of visual content, upon which further decisions, or operations, can be naturally carried out. This is contrary to the prevalent practice in endoscopic image analysis for medical decision support, where image content is primarily captured by discriminative, high-dimensional, appearance features, which possess discriminative power but lack human interpretability. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  6. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention as next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Gaining an understanding of their structure, such as the connectivity, coverage, and alignment of nanowires, requires knowledge of the individual nanowires inside microscopic images taken from the film. Although nanowires are flexible to a certain extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm, based on filtering in the frequency domain, for detecting rod-shaped objects inside binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely, connected components aligned in a certain direction. Those components are post-processed and combined, under a given merging rule, into single rod objects. In this study, the microscopic properties of rod networks relevant to the opto-electric performance of transparent conductive films, namely their alignment distribution, length distribution, and area fraction, were measured. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. With properly selected parameters, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip coating method.
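The frequency-domain idea can be sketched as follows: a rod at angle θ in the image concentrates its spectral energy along the perpendicular direction in the frequency plane, so an angular mask there acts as an orientation-selective filter. This is a simplified illustration only; the paper's actual filter design, merging rule, and parameters are not reproduced here:

```python
import numpy as np

def directional_response(binary_img, theta_deg, tol_deg=15):
    """Energy of a binary image passed through an orientation-selective
    frequency-domain filter; rods aligned at theta_deg respond strongly.
    The mask keeps frequencies within tol_deg of the direction
    perpendicular to theta_deg (plus DC, to preserve the mean)."""
    h, w = binary_img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    ang = np.degrees(np.arctan2(fy, fx))             # angle of each frequency
    target = (theta_deg + 90.0) % 180.0
    d = np.abs((ang - target + 90.0) % 180.0 - 90.0)  # angular distance mod 180
    mask = (d <= tol_deg) | ((fx == 0) & (fy == 0))
    spec = np.fft.fft2(binary_img) * mask
    out = np.fft.ifft2(spec).real
    return float(np.sum(out * binary_img))
```

Sweeping theta_deg and keeping the strongly responding components is one way such aligned segments could be isolated before the merging step described in the abstract.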

  7. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis , particularly in the...2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis , and (4) to identify...approaches to three important tasks for expert systems in the domain of image analysis . This segment concluded with an assessment of the strengths

  8. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact, and radiation-free imaging modality for the assessment of painful abnormal inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening. In this study, inflammatory analysis was performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms were pre-processed, and areas of interest were extracted for further processing. The spread of inflammation was investigated along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain diseases; ii) analysis of the spread of inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion is that, in most cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
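The K-means step can be illustrated on the pixel temperatures of a thermogram: clustering the 1-D temperature values separates background, normal skin, and hotter candidate regions. A generic sketch, not the authors' pipeline (cluster count and initialization are assumptions):

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """1-D K-means on pixel temperatures; the hottest cluster delineates
    candidate inflamed regions in a knee thermogram. Centers are
    initialized at evenly spaced quantiles for deterministic, stable runs."""
    values = np.asarray(values, dtype=float)
    centers = np.quantile(values, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
    return centers, labels
```

Comparing the area labeled by the hottest cluster in the left and right knees is one simple way the bilateral/unilateral distinction reported above could be quantified.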

  9. Vibration signature analysis of AFM images

    SciTech Connect

    Joshi, G.A.; Fu, J.; Pandit, S.M.

    1995-12-31

    Vibration signature analysis has been commonly used for machine condition monitoring and the control of errors. However, it has rarely been employed for the analysis of precision instruments such as an atomic force microscope (AFM). In this work, an AFM was used to collect vibration data from a sample positioning stage under different suspension and support conditions. Certain structural characteristics of the sample positioning stage show up as a result of the vibration signature analysis of the surface height images measured using an AFM. It is important to understand these vibration characteristics in order to reduce vibrational uncertainty, improve the damping and structural design, and eliminate imaging imperfections. The choice of method applied for vibration analysis may affect the results. Two methods, the data dependent systems (DDS) analysis and Welch's periodogram averaging method, were investigated for application to this problem. Both techniques provide smooth spectrum plots from the data. Welch's periodogram provides a coarse resolution, as limited by the number of samples, and requires a choice of window to be decided subjectively by the user. The DDS analysis provides sharper spectral peaks at a much higher resolution and a much lower noise floor. A decomposition of the signal variance in terms of the frequencies is provided as well. The technique is based on an objective model adequacy criterion.
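For contrast with DDS, Welch's periodogram averaging is simple enough to sketch directly: split the signal into overlapping segments, window each, and average the squared FFT magnitudes. A textbook numpy implementation, not the authors' code (the Hann window here is the subjective choice the abstract alludes to):

```python
import numpy as np

def welch_psd(x, fs, nperseg=256):
    """Welch's method: average windowed periodograms of 50%-overlapping
    segments. Frequency resolution is fs/nperseg, so fewer samples per
    segment means coarser resolution (the tradeoff noted in the abstract).
    One-sided scaling factors are omitted; peak locations are unaffected."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    scale = fs * np.sum(win ** 2)
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psd = np.mean([np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs],
                  axis=0)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, psd
```

Applied to a scan-line height signal, the peaks of the averaged periodogram indicate candidate structural resonances of the stage, at the resolution limit fs/nperseg.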

  10. Principal component analysis based hyperspectral image fusion in imaging spectropolarimeter

    NASA Astrophysics Data System (ADS)

    Ren, Wenyi; Wu, Dan; Jiang, Jiangang; Yang, Guoan; Zhang, Chunmin

    2017-02-01

    Image fusion is of great importance in object detection. A PCA-based image fusion method is proposed. A pixel-level averaging method and a wavelet-based method were implemented for comparison. Several performance metrics that require no reference image were used to evaluate the image fusion algorithms. It is concluded that the PCA-based method showed better performance than the alternatives.
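The abstract does not spell out the algorithm, but one common form of PCA-based fusion weights each registered source image by the components of the leading eigenvector of their joint pixel covariance. A sketch of that textbook variant, offered as an assumption rather than the paper's method:

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two registered, same-size images using weights from the first
    principal component of their joint pixel covariance: the image with
    more variance (information) gets the larger weight."""
    data = np.stack([img_a.ravel(), img_b.ravel()])
    cov = np.cov(data)                       # 2x2 covariance of the two bands
    vals, vecs = np.linalg.eigh(cov)
    w = np.abs(vecs[:, np.argmax(vals)])     # leading eigenvector
    w = w / w.sum()                          # normalize to convex weights
    return w[0] * img_a + w[1] * img_b
```

Because the weights are nonnegative and sum to one, the fused result is a pixelwise convex combination of the inputs, which keeps its dynamic range within that of the sources.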

  11. BioImage Suite: An integrated medical image analysis suite: An update.

    PubMed

    Papademetris, Xenophon; Jackowski, Marcel P; Rajeevan, Nallakkandi; DiStasio, Marcello; Okuda, Hirohito; Constable, R Todd; Staib, Lawrence H

    2006-01-01

    BioImage Suite is an NIH-supported medical image analysis software suite developed at Yale. It leverages both the Visualization Toolkit (VTK) and the Insight Toolkit (ITK), and it includes many additional algorithms for image analysis, especially in the areas of segmentation, registration, diffusion-weighted image processing, and fMRI analysis. BioImage Suite has a user-friendly interface developed in the Tcl scripting language. A final beta version is freely available for download.

  12. Computerized image analysis of digitized infrared images of breasts from a scanning infrared imaging system

    NASA Astrophysics Data System (ADS)

    Head, Jonathan F.; Lipari, Charles A.; Elliot, Robert L.

    1998-10-01

    Infrared imaging of the breasts has been shown to be of value in the risk assessment, detection, diagnosis, and prognosis of breast cancer. However, infrared imaging has not been widely accepted, for a variety of reasons, including the lack of standardization of the subjective visual analysis method. The subjective nature of the standard visual analysis makes it difficult to achieve equivalent results with different equipment and different interpreters of the infrared patterns of the breasts. Therefore, this study was undertaken to develop more objective analysis methods for infrared images of the breasts, creating semiquantitative and quantitative analyses from computer-assisted image analysis of the mean temperatures of whole breasts and of breast quadrants. When using objective quantitative data on whole breasts (comparing differences in the means of left and right breasts), semiquantitative data on quadrants of the breast (determining an index by summation of scores for each quadrant), or summation of quantitative data on quadrants of the breasts, there was a decrease in the number of abnormal patterns (positives) in patients being screened for breast cancer and an increase in the number of abnormal patterns (true positives) in the breast cancer patients. It is hoped that the decrease in positives in women being screened for breast cancer will translate into a decrease in false positives, but larger numbers of women with longer follow-up will be needed to clarify this. Also, a much larger group of breast cancer patients will need to be studied in order to see whether there is a true increase in the percentage of breast cancer patients presenting with abnormal infrared images of the breast under these objective image analysis methods.

  13. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  14. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing...thanks go to Pankaj who inspired me in research, to Prasad from whom I have learned so much, and to Ronie and Laureen, the memories of whose company...of images to determine egomotion and to extract information from the scene. Research in motion analysis has been focussed on the problems of

  15. Image Correlation: Part 1. Simulation and Analysis

    DTIC Science & Technology

    1976-11-01

    prepared for UNITED STATES AIR FORCE PROJECT RAND. The research described in this...Analysis, Deputy Chief of Staff, Research and Development, Hq USAF. Reports of The Rand Corporation do not necessarily reflect the opinions or policies of...the sponsors of Rand research. R-2057/1-PR November 1976. Image Correlation: Part I, Simulation and Analysis. H. H. Bailey, F. W. Blackwell

  16. Tomographic spectral imaging: analysis of localized corrosion.

    SciTech Connect

    Michael, Joseph Richard; Kotula, Paul Gabriel; Keenan, Michael Robert

    2005-02-01

    Microanalysis is typically performed to analyze the near surface of materials. There are many instances where chemical information about the third spatial dimension is essential to the solution of materials analyses. The majority of 3D analyses, however, focus on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3-spatial and 1-spectral dimension) x-ray spectral images is described. The method of utilizing a single- or dual-beam FIB (without or with SEM) to get at 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and then the electron beam is rastered over the exposed surfaces, with x-ray spectral images being acquired at each section. All this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. The tomographic spectral image (TSI) data sets can be in excess of 4 Gbytes. This problem has been overcome (for now), and images up to 6 Gbytes have been analyzed in this work. The method for analyzing such large spectral images is described in this presentation. A comprehensive 3D chemical analysis was performed on several corrosion specimens.

  17. Image analysis from root system pictures

    NASA Astrophysics Data System (ADS)

    Casaroli, D.; Jong van Lier, Q.; Metselaar, K.

    2009-04-01

    Root research has been hampered by a lack of good methods and by the amount of time involved in making measurements. In general, studies of root systems are made with either the monolith or the minirhizotron method, which is used as a quantitative tool but requires comparison with conventional destructive methods. This work aimed to analyze root system images, obtained from a root atlas book, for different crops in order to find the root length and root length density and correlate them with the literature. Images of five crops (Zea mays, Secale cereale, Triticum aestivum, Medicago sativa and Panicum miliaceum) were divided into horizontal and vertical layers. Root length distribution was analyzed for horizontal as well as vertical layers. In order to obtain the root length density, a cuboidal volume was assumed to correspond to each part of the image. The results from regression analyses showed root length distributions according to horizontal or vertical layers. It was possible to find the root length distribution for single horizontal layers as a function of vertical layers, and also for single vertical layers as a function of horizontal layers. Regression analysis showed good fits when the root length distributions were grouped in horizontal layers according to the distance from the root center. When root length distributions were grouped according to soil horizons, the fits worsened. The resulting root length density estimates were lower than those commonly found in the literature, possibly because (1) the crop images resulted from single-plant situations, while the analyzed field experiments had more than one plant; (2) root overlapping may occur in the field; (3) root experiments, both in the field and in image analyses as performed here, are subject to sampling errors; and (4) the (hand-drawn) images used in this study may have omitted some of the smallest roots.

  18. Image analysis applied to luminescence microscopy

    NASA Astrophysics Data System (ADS)

    Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard

    1998-04-01

    We have developed a novel approach to studying luminescent light emission during the migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is directly captured by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on the activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and a time exposure of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated under these experimental conditions. In order to estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensities emitted by the cells. Kinetic changes of luminescent sources, and the distance and speed of migration, were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology for quantifying kinetic changes in luminescence microscopy.

  19. Automatic dirt trail analysis in dermoscopy images.

    PubMed

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved an area of 0.902 under the receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of discovering them earlier with instrumentation. © 2011 John Wiley & Sons A/S.

  20. Hyperspectral imaging technology for pharmaceutical analysis

    NASA Astrophysics Data System (ADS)

    Hamilton, Sara J.; Lodder, Robert A.

    2002-06-01

    The sensitivity and spatial resolution of hyperspectral imaging instruments are tested in this paper using pharmaceutical applications. The first experiment tested the hypothesis that a near-IR tunable-diode-based remote sensing system is capable of monitoring the degradation of hard gelatin capsules at a relatively long distance; spectra from the capsules were used to differentiate among capsules exposed to different atmospheres. The second experiment showed that imaging spectrometry of tablets permits the identification and composition of multiple individual tablets to be determined simultaneously. A near-IR camera was used to collect thousands of spectra simultaneously from a field of blister-packaged tablets. The number of tablets that a typical near-IR camera can currently analyze simultaneously was estimated to be approximately 1300. The bootstrap error-adjusted single-sample technique, a chemometric-imaging algorithm, was used to draw probability-density contour plots that revealed tablet composition. The single-capsule analysis indicates how far apart the sample and instrumentation can be while still maintaining adequate S/N, while the multiple-sample imaging experiment indicates how many samples can be analyzed simultaneously while maintaining adequate S/N and pixel coverage on each sample.

  1. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
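Once a print photograph is binarized, the proposed Print Index reduces to a simple area measurement. A minimal sketch, assuming a global threshold (the paper's actual thresholding and illumination-calibration steps are not detailed here):

```python
import numpy as np

def print_index(gray, threshold=128):
    """Print Index: percentage of the analyzed area covered by engraved
    (dark) lines. `gray` is a grayscale image; pixels darker than
    `threshold` are counted as line."""
    dark = gray < threshold
    return 100.0 * dark.mean()
```

Comparing the index over the same engraved region across different impressions is how the line-thinning from successive plate polishings could be tracked numerically.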

  2. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.

  3. Quantitative color analysis for capillaroscopy image segmentation.

    PubMed

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Amorosi, Beatrice; D'Alessio, Tommaso; Palma, Claudio

    2012-06-01

    This communication introduces a novel approach for quantitatively evaluating the role of color space decomposition in digital nailfold capillaroscopy analysis. It is clinically recognized that alterations of the capillary pattern at the periungual skin region are directly related to dermatologic and rheumatic diseases. The proposed algorithm for the segmentation of digital capillaroscopy images is optimized with respect to the choice of color space and the contrast variation. Since the color space is a critical factor for segmenting low-contrast images, an exhaustive comparison between different color channels is conducted and a novel color channel combination is presented. Results from images of 15 healthy subjects are compared with annotated data, i.e. selected images approved by clinicians. From this comparison, a set of figures of merit is extracted that highlights the algorithm's capability to correctly segment capillaries, their shape, and their number. Experimental tests show that the optimized procedure for capillary segmentation, based on a novel color channel combination, yields average accuracy values higher than 0.8 and extracts capillaries whose shape and granularity are acceptable. These results are particularly encouraging for future developments in the classification of capillary patterns with respect to dermatologic and rheumatic diseases.
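To make the idea of a channel combination concrete: a linear mix of RGB channels can boost the contrast of reddish capillaries against pale skin before segmentation. The combination below (green minus blue, rescaled to [0, 1]) is purely hypothetical; the abstract does not disclose the paper's actual channels or weights:

```python
import numpy as np

def channel_combination(rgb):
    """Hypothetical contrast-enhancing channel combination for nailfold
    capillaroscopy: green minus blue (hemoglobin absorbs strongly in
    green), min-max rescaled to [0, 1] for thresholding."""
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    chan = g - b
    chan -= chan.min()
    if chan.max() > 0:
        chan /= chan.max()
    return chan
```

Any segmentation step (thresholding, region growing) would then operate on this single enhanced channel instead of the raw RGB triple.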

  4. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such, we have historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography, like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1), to the latest high-definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory-augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into them, both by academia and by industry. In this chapter, a review of some of the most important image analysis tools is presented.

  5. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would help nurses eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  6. Intra voxel analysis in magnetic resonance imaging.

    PubMed

    Ambrosanio, Michele; Baselice, Fabio; Ferraioli, Giampaolo; Lenti, Flavia; Pascazio, Vito

    2017-04-01

    A technique for analyzing the composition of each voxel, in the magnetic resonance imaging (MRI) framework, is presented. By combining different acquisitions, a novel methodology, called intra voxel analysis (IVA), for the detection of multiple tissues and the estimation of their spin-spin relaxation times is proposed. The methodology exploits the sparse Bayesian learning (SBL) approach in order to solve a highly underdetermined problem by imposing sparsity on the solution. IVA, developed for the spin echo imaging sequence, can be easily extended to any acquisition scheme. For validating the approach, simulated and real data sets are considered. Monte Carlo simulations have been implemented to evaluate the performance of IVA compared to existing methods in the literature. Two clinical datasets acquired with a 3T scanner have been considered for validating the approach. Compared to other approaches presented in the literature, IVA has proved to be more effective in voxel composition analysis, in particular in the case of few acquired images. Results are interesting and very promising: IVA is expected to have a remarkable impact on the research community and on the diagnostic field. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
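
    The SBL machinery of IVA is not reproduced here. As a hedged numpy sketch of the underlying forward model only: a voxel's spin-echo signal is treated as a sparse mixture of exponential decays over a grid of candidate T2 values, and the sparse two-tissue solution is found by exhaustive pair-wise least squares instead of sparse Bayesian learning. The echo times, T2 grid and fractions are all invented.

```python
import numpy as np
from itertools import combinations

TE = np.arange(10, 330, 10.0)                           # echo times (ms), assumed
t2_grid = np.array([20., 50., 80., 150., 300., 600.])   # candidate T2 values (ms)
D = np.exp(-TE[:, None] / t2_grid[None, :])             # dictionary of decay curves

def two_tissue_fit(signal):
    """Find the pair of grid T2s (and their fractions) best explaining a voxel decay."""
    best = None
    for i, j in combinations(range(len(t2_grid)), 2):
        A = D[:, [i, j]]
        x, *_ = np.linalg.lstsq(A, signal, rcond=None)
        resid = np.linalg.norm(A @ x - signal)
        if best is None or resid < best[0]:
            best = (resid, (t2_grid[i], t2_grid[j]), x)
    return best[1], best[2]

# synthetic voxel: 60% of a T2 = 80 ms tissue plus 40% of a T2 = 300 ms tissue
s = 0.6 * np.exp(-TE / 80.0) + 0.4 * np.exp(-TE / 300.0)
t2s, fracs = two_tissue_fit(s)
```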

  7. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the...of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  8. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the relevance of the number of processed frames, the frames' quality, the atmosphere conditions and the detection noise on the companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.
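
    The authors' exact pipeline is not given in the abstract, so the following is only a toy reading of the principle: across many short exposures, a pixel hosting a real source co-fluctuates with the host star, while pure detection noise does not, so a per-pixel covariance map against the host-star intensity series can reveal a companion about 1000 times fainter. The frame count, gain statistics and positions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 21
star, comp = (10, 10), (10, 14)          # host star and faint companion positions
n_frames, ratio = 2000, 1000.0           # companion is 1000x fainter

scene = np.zeros((H, W))
scene[star] = 1000.0
scene[comp] = 1000.0 / ratio
gain = 1.0 + 0.3 * rng.standard_normal(n_frames)   # frame-to-frame "lucky" fluctuation
frames = gain[:, None, None] * scene + rng.standard_normal((n_frames, H, W))

ref = frames[:, star[0], star[1]]                  # host-star intensity series
cov_map = np.einsum('thw,t->hw',
                    frames - frames.mean(0), ref - ref.mean()) / n_frames
```

    The covariance map peaks at the star and, far above the noise floor, at the companion pixel.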

  9. PAMS photo image retrieval prototype alternatives analysis

    SciTech Connect

    Conner, M.L.

    1996-04-30

    Photography and Audiovisual Services uses a system called the Photography and Audiovisual Management System (PAMS) to perform order entry and billing services. The PAMS system utilizes Revelation Technologies database management software, AREV. Work is currently in progress to link the PAMS AREV system to a Microsoft SQL Server database engine to provide photograph indexing and query capabilities. The link between AREV and SQL Server will use a technique called "bonding." This photograph imaging subsystem will interface to the PAMS system and handle the image capture and retrieval portions of the project. The intent of this alternatives analysis is to examine the software and hardware alternatives available to meet the requirements for this project, and identify a cost-effective solution.

  10. Analysis on enhanced depth of field for integral imaging microscope.

    PubMed

    Lim, Young-Tae; Park, Jae-Hyeung; Kwon, Ki-Chul; Kim, Nam

    2012-10-08

    The depth of field of the integral imaging microscope is studied. In the integral imaging microscope, 3-D information is encoded in the form of elemental images. The distance between the intermediate plane and an object point determines the number of elemental images and the depth of field of the integral imaging microscope. From the analysis, it is found that the depth of field of the depth plane image reconstructed by computational integral imaging reconstruction is longer than that of an optical microscope. Based on the analyzed relationship, an experiment using integral imaging microscopy and conventional microscopy was also performed to confirm the enhanced depth of field of integral imaging microscopy.

  11. Machine Learning Interface for Medical Image Analysis.

    PubMed

    Zhang, Yi C; Kagen, Alexander C

    2016-10-11

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95% CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95% CI 0.947-1.00) and mean specificity was 0.822 ± 0.207 (95% CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
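
    Neither TensorFlow nor the PPMI data are assumed available here; the sketch below reproduces only the training step the abstract describes (gradient descent minimizing cross-entropy between predicted and ground-truth labels), using plain numpy logistic regression on synthetic two-class "scan" features as a stand-in for the network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, iters=1000):
    """Gradient descent minimizing binary cross-entropy; X is (n, d), y in {0, 1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = sigmoid(X @ w + b)            # predicted probability of class 1
        w -= lr * X.T @ (p - y) / len(y)  # gradient of mean cross-entropy w.r.t. w
        b -= lr * np.mean(p - y)
    return w, b

# two separable synthetic clusters standing in for normal vs. disease scans
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 4)), rng.normal(1, 0.5, (50, 4))])
y = np.r_[np.zeros(50), np.ones(50)]
w, b = train_logistic(X, y)
acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
```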

  12. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks, grouped into three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software Canvas and Corel Draw.

  13. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
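
    The paper's wavelet-frame features and maximum-likelihood classifier are not specified in the abstract; as an assumed stand-in, the sketch below uses subband energies of a one-level 2-D Haar transform as texture features and a nearest-class-mean rule (maximum likelihood under equal-covariance Gaussians). The two synthetic "soil" textures are invented.

```python
import numpy as np

def haar_features(img):
    """Energies of the three detail subbands of a one-level 2-D Haar transform."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.array([np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)])

def nearest_mean(feat, class_means):
    """ML label under equal-covariance Gaussians = nearest class mean."""
    return min(class_means, key=lambda k: np.linalg.norm(feat - class_means[k]))

rng = np.random.default_rng(2)
fine = rng.normal(0, 0.05, (64, 64))                             # fine-grained texture
blocky = np.kron(rng.normal(0, 1.0, (16, 16)), np.ones((4, 4)))  # coarse blocky texture
means = {"fine": haar_features(fine), "blocky": haar_features(blocky)}
label = nearest_mean(haar_features(rng.normal(0, 0.05, (64, 64))), means)
```

    Fine-grained textures carry more high-frequency subband energy than coarse blocky ones, which is what makes the features discriminative.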

  14. Image analysis software and sample preparation demands

    NASA Astrophysics Data System (ADS)

    Roth, Karl n.; Wenzelides, Knut; Wolf, Guenter; Hufnagl, Peter

    1990-11-01

    Image analysis offers the opportunity to analyse many processes in medicine, biology and engineering in a quantitative manner. Experience shows that it is only through awareness of preparation methods and attention to software design that full benefit can be reaped from a picture processing system in the fields of cytology and histology. Some examples of special stains for automated analysis are given here, and the effectiveness of commercially available software packages is investigated. The application of picture processing and the development of related special hardware and software have been increasing in recent years. As PC-based picture processing systems can be purchased at reasonable cost, more and more users are confronted with these problems. Experience shows that the quality of commercially available software packages differs, and the sample preparation requirements for successful problem solutions are often underestimated. But as always, sample preparation is still the key to success in automated image analysis of cells and tissues. Hence, a problem solution requires permanent interaction between sample preparation methods and algorithm development.

  15. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different statuses of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle-point algorithm. Chromosome bands are enhanced by an algorithm based on multiscale B-spline wavelets, extracted via average gray, gradient and shape profiles, and characterized by WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  16. Quantitative Analysis in Nuclear Medicine Imaging

    NASA Astrophysics Data System (ADS)

    Zaidi, Habib

    This book provides a review of image analysis techniques as they are applied in the field of diagnostic and therapeutic nuclear medicine. Driven in part by the remarkable increase in computing power and its ready and inexpensive availability, this is a relatively new yet rapidly expanding field. Likewise, although the use of radionuclides for diagnosis and therapy has origins dating back almost to the discovery of natural radioactivity itself, radionuclide therapy and, in particular, targeted radionuclide therapy has only recently emerged as a promising approach for therapy of cancer and, to a lesser extent, other diseases.

  17. Cell tracking for cell image analysis

    NASA Astrophysics Data System (ADS)

    Bise, Ryoma; Sato, Yoichi

    2017-04-01

    Cell image analysis is important for research and discovery in biology and medicine. In this paper, we present our cell tracking methods, which are capable of obtaining fine-grained cell behavior metrics. To address difficulties under dense culture conditions, where cell detection cannot be done reliably since cells often touch with blurry intercellular boundaries, we propose two methods: global data association, and jointly solving cell detection and association. We also show the effectiveness of the proposed methods by applying them to biological research.

  18. AUTOMATIC DIRT TRAIL ANALYSIS IN DERMOSCOPY IMAGES

    PubMed Central

    Cheng, Beibei; Stanley, R. Joe; Stoecker, William V.; Osterwise, Christopher T.P.; Stricklin, Sherea M.; Hinton, Kristen A.; Moss, Randy H.; Oliviero, Margaret; Rabinovitz, Harold S.

    2011-01-01

    Basal cell carcinoma (BCC) is the most common cancer in the U.S. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved a 0.902 area under a receiver operating characteristic curve using a leave-one-out approach, demonstrating the potential of dirt trails for BCC lesion discrimination. PMID:22233099

  19. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
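
    Bayesian Positive Source Separation itself is not reproduced here. A hedged sketch of the core operation only: average the spectra of a superpixel's pixels, then solve a nonnegative unmixing of that mean spectrum against a small spectral library, here by projected gradient descent; the library, abundances and superpixel are synthetic.

```python
import numpy as np

def unmix_nonneg(E, s, iters=5000):
    """Nonnegative least-squares abundances: min ||E a - s||, a >= 0 (projected gradient)."""
    a = np.zeros(E.shape[1])
    step = 1.0 / np.linalg.norm(E, 2) ** 2        # safe step size (1 / Lipschitz constant)
    for _ in range(iters):
        a = np.maximum(0.0, a - step * E.T @ (E @ a - s))
    return a

rng = np.random.default_rng(3)
E = rng.uniform(0.0, 1.0, (50, 3))               # 50-band library, 3 candidate minerals
truth = np.array([0.7, 0.3, 0.0])                # sparse: third mineral absent
pixels = np.tile(E @ truth, (16, 1))             # one toy "superpixel" of 16 pixels
s = pixels.mean(axis=0)                          # superpixel mean spectrum
a = unmix_nonneg(E, s)
```

    Averaging over superpixels is what collapses thousands of per-pixel problems into a handful, which is the source of the claimed speedup.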

  20. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements suitable for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images have been analyzed by applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadows and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
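
    The de-noising and thresholding chain used in the study is not detailed, so the sketch below assumes a plain Otsu threshold to separate shadow from lit soil and reports the shadow percentage; the synthetic image and noise level are invented.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a grayscale image with values in [0, 255]."""
    hist = np.bincount(gray.ravel().astype(int), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    w0 = np.cumsum(p)               # class probability at or below each level
    m = np.cumsum(p * levels)       # cumulative mean
    mt = m[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))  # between-class variance
    return int(np.nanargmax(between))

def percent_shadow(gray):
    """Percentage of pixels at or below the Otsu threshold."""
    return 100.0 * np.mean(gray <= otsu_threshold(gray))

# synthetic soil surface: 30% shadow (dark) rows, 70% lit rows, mild noise
rng = np.random.default_rng(4)
img = np.full((100, 100), 200.0)
img[:30, :] = 40.0
img = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
pac_shadow = percent_shadow(img)
```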

  1. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, as measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
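
    The authors' novel monotonic correlation coefficient is not defined in the abstract; for illustration only, the classical rank-based Spearman coefficient is sketched below, showing how a rank statistic scores a perfectly monotonic but nonlinear IQ-vs-performance relation at 1.0 while the linear (Pearson) coefficient does not. The data are invented.

```python
import numpy as np

def rankdata(x):
    """Ranks 1..n (no ties expected in this toy example)."""
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def spearman(x, y):
    """Rank correlation: Pearson correlation of the ranks."""
    return float(np.corrcoef(rankdata(x), rankdata(y))[0, 1])

iq = np.linspace(0.1, 2.0, 25)   # hypothetical IQ-feature values
perf = iq ** 3                   # perfectly monotonic, strongly nonlinear performance
rho = spearman(iq, perf)
pear = float(np.corrcoef(iq, perf)[0, 1])
```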

  2. Dynamic and still microcirculatory image analysis for quantitative microcirculation research

    NASA Astrophysics Data System (ADS)

    Ying, Xiaoyou; Xiu, Rui-juan

    1994-05-01

    Based on analyses of various types of digital microcirculatory images (DMCI), we summarize the image features of DMCI, the digitizing demands for digital microcirculatory imaging, and the basic characteristics of DMCI processing. A dynamic and still imaging separation processing (DSISP) mode was designed for developing a DMCI workstation and the DMCI processing. The original images in this study were clinical microcirculatory images from human finger nail-bed and conjunctiva microvasculature, and intravital microvascular network images from animal tissues or organs. A series of dynamic and still microcirculatory image analysis functions were developed in this study. The experimental results indicate that most of the established analog video image analysis methods for microcirculatory measurement could be realized in a more flexible way based on the DMCI. More information can be rapidly extracted from the quality-improved DMCI by employing intelligent digital image analysis methods. The DSISP mode is very suitable for building a DMCI workstation.

  3. Correlative feature analysis of FFDM images

    NASA Astrophysics Data System (ADS)

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-03-01

    Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was initially applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture and context of the lesion, as well as its distance to nipple. We employed a two-step method to select an effective subset of features, and combined it with a BANN to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. By using a FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes distance feature, lesion size and lesion contrast, yielded an AUC of 0.86. The improvement from using multiple features was statistically significant compared to single-feature performance (p < 0.001).
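
    The ROC tooling used in the study is not named; as a minimal sketch, the AUC equals the probability that a randomly chosen corresponding pair scores above a randomly chosen non-corresponding one (the Mann-Whitney statistic), computable directly from the two score lists. The scores below are invented.

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic (ties count 1/2)."""
    pos = np.asarray(pos_scores, float)
    neg = np.asarray(neg_scores, float)
    gt = (pos[:, None] > neg[None, :]).sum()   # positive-over-negative pairs
    eq = (pos[:, None] == neg[None, :]).sum()  # tied pairs
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

# toy discriminant scores for corresponding (pos) vs non-corresponding (neg) pairs
a1 = auc([0.9, 0.8, 0.4], [0.3, 0.2, 0.7])
```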

  4. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing) but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  5. Nonlinear analysis for image stabilization in IR imaging system

    NASA Astrophysics Data System (ADS)

    Xie, Zhan-lei; Lu, Jin; Luo, Yong-hong; Zhang, Mei-sheng

    2009-07-01

    In order to acquire a stabilized image in an IR imaging system, an image stabilization system is required. Linear methods are often used in current research on such systems, and a simple PID controller can meet the demands of common users. In fact, an image stabilization system is a structure with nonlinear characteristics such as structural errors, friction and disturbances. In upgraded IR imaging systems, even an optimally designed conventional PID controller cannot meet the demands of higher accuracy and fast response when disturbances are present. To get a high-quality stabilized image, these nonlinear characteristics should be rejected. Friction and gear clearance are key factors and play an important role in the image stabilization system. Friction induces static error in the system. When the system runs at low speed, stick-slip and creeping induced by friction not only decrease resolution and repeatability, but also increase the tracking error and the steady-state error. The accuracy of the system is also limited by gear clearance, and self-excited vibration is brought on by severe clearance. In this paper, the effects of different nonlinearities on image stabilization precision are analyzed, including friction and gear clearance. After analyzing the characteristics and influence of friction and gear clearance, a friction model composed of static friction, Coulomb friction and viscous friction is established with the MATLAB Simulink toolbox, and a gear clearance nonlinearity model is built, providing a theoretical basis for future engineering practice.
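
    The Simulink model itself is not available; as a minimal sketch (in Python rather than MATLAB) of the friction structure the abstract names, the function below combines static (stiction), Coulomb and viscous components. All coefficients are invented and the Stribeck effect is omitted.

```python
import math

def friction_force(v, f_applied, fs=1.2, fc=1.0, fv=0.05):
    """Static + Coulomb + viscous friction model (units arbitrary).

    fs: breakaway (static) level, fc: Coulomb level, fv: viscous coefficient.
    """
    if v == 0.0:
        # stiction: friction cancels the applied force up to the breakaway level
        return -max(-fs, min(fs, f_applied))
    # moving: Coulomb friction opposing motion plus a viscous term
    return -(fc * math.copysign(1.0, v) + fv * v)
```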

  6. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function that highly depends on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallouts from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
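
    The microscope pipeline is not detailed in the abstract; the sketch below only shows the quantity being estimated: given a binary particle mask from an analyzed wafer image, PAC is the percentage of surface pixels covered by particles, which the MIL-STD-1246C binned method approximates. The mask is synthetic.

```python
import numpy as np

def percent_area_coverage(mask):
    """PAC: percentage of surface pixels covered by particles (boolean mask)."""
    return 100.0 * np.count_nonzero(mask) / mask.size

# toy witness-wafer mask with two irregular "particles"
wafer = np.zeros((200, 200), dtype=bool)
wafer[10:14, 10:14] = True      # one 4x4-pixel particle (16 px)
wafer[100:102, 50:60] = True    # one 2x10-pixel particle (20 px)
pac = percent_area_coverage(wafer)
```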

  7. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  8. An Expert Image Analysis System For Chromosome Analysis Application

    NASA Astrophysics Data System (ADS)

    Wu, Q.; Suetens, P.; Oosterlinck, A.; Van den Berghe, H.

    1987-01-01

    This paper reports a recent study on applying a knowledge-based system approach as a new attempt to solve the problem of chromosome classification. A theoretical framework for an expert image analysis system is proposed, based on this study. In this scheme, chromosome classification can be carried out under a hypothesize-and-verify paradigm by integrating a rule-based component, in which the expertise of chromosome karyotyping is formulated, with an existing image analysis system that uses conventional pattern recognition techniques. Results from the existing system can be used to generate hypotheses, and with the rule-based verification and modification procedures, improvement of the classification performance can be expected.

  9. Application of automatic image analysis in wood science

    Treesearch

    Charles W. McMillin

    1982-01-01

    In this paper I describe an image analysis system and illustrate with examples the application of automatic quantitative measurement to wood science. Automatic image analysis, a powerful and relatively new technology, uses optical, video, electronic, and computer components to rapidly derive information from images with minimal operator interaction. Such instruments...

  10. High speed image correlation for vibration analysis

    NASA Astrophysics Data System (ADS)

    Siebert, T.; Wood, R.; Splitthof, K.

    2009-08-01

    Digital speckle correlation techniques have already been successfully proven to be an accurate displacement analysis tool for a wide range of applications. With the use of two cameras, three-dimensional measurements of contours and displacements can be carried out with a simple setup. Rapid developments in the field of digital imaging and computer technology extend these measurement methods to high-speed deformation and strain analysis, e.g. in the fields of material testing, fracture mechanics, advanced materials and component testing. The high resolution of the deformation measurements in space and time opens a wide range of applications for vibration analysis of objects. Since the system determines the absolute position and displacements of the object in space, it is capable of measuring high amplitudes and even objects with rigid body movements. The absolute resolution depends on the field of view and is scalable. Calibration of the optical setup is a crucial point, which is discussed in detail. Examples of the analysis of harmonic vibrations and transient events from material research and industrial applications are presented. The results show typical features of the system.
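
    The commercial correlation software is not reproduced; a hedged sketch of the core of digital image correlation only: track a square subset from a reference speckle image into a deformed image by exhaustively searching for the integer-pixel offset that maximizes the zero-normalized cross-correlation (subpixel refinement and stereo calibration omitted). The speckle pattern and shift are synthetic.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_subset(ref, cur, top, left, size, search=5):
    """Integer-pixel displacement of a square subset by exhaustive ZNCC search."""
    tmpl = ref[top:top + size, left:left + size]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= cur.shape[0] and x + size <= cur.shape[1]:
                c = zncc(tmpl, cur[y:y + size, x:x + size])
                if c > best:
                    best, best_d = c, (dy, dx)
    return best_d

rng = np.random.default_rng(5)
ref = rng.uniform(0, 1, (64, 64))                    # synthetic speckle pattern
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # rigid shift: +3 rows, -2 cols
d = track_subset(ref, cur, 20, 20, 16)
```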

  11. Cellular Image Analysis and Imaging by Flow Cytometry

    PubMed Central

    Basiji, David A.; Ortyn, William E.; Liang, Luchuan; Venkatachalam, Vidya; Morrissey, Philip

    2007-01-01

    Imaging flow cytometry combines the statistical power and fluorescence sensitivity of standard flow cytometry with the spatial resolution and quantitative morphology of digital microscopy. The technique is a good fit for clinical applications, providing a convenient means for imaging and analyzing cells directly in bodily fluids. Examples are provided of the discrimination of cancerous from normal mammary epithelial cells and the high-throughput quantitation of FISH probes in human peripheral blood mononuclear cells. The FISH application will be further enhanced by the integration of extended depth of field imaging technology with the current optical system. PMID:17658411

  12. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
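    The record does not include the actual vector routines; purely as an illustration, the gain from array-wide (SIMD-friendly) operations over per-pixel loops can be sketched in Python/NumPy, here on a hypothetical beam-centroid diagnostic (the centroid task and image are assumptions, not the APS code):

```python
import numpy as np

def centroid_loop(img):
    """Per-pixel beam centroid: the slow, scalar reference version."""
    total = sx = sy = 0.0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            v = img[y, x]
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total

def centroid_vector(img):
    """Same centroid computed with vectorized array operations."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (img * xs).sum() / total, (img * ys).sum() / total

# A synthetic "beam spot" image: a single bright pixel at (row=3, col=5).
img = np.zeros((8, 8))
img[3, 5] = 10.0
assert centroid_loop(img) == centroid_vector(img) == (5.0, 3.0)
```

    Both functions return the same result; on real hardware the vectorized version runs orders of magnitude faster because the inner loop moves into optimized array code.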

  13. Thermal image analysis for detecting facemask leakage

    NASA Astrophysics Data System (ADS)

    Dowdall, Jonathan B.; Pavlidis, Ioannis T.; Levine, James

    2005-03-01

    Due to the modern advent of near ubiquitous access to rapid international transportation, the epidemiologic trends of highly communicable diseases can be devastating. With the recent emergence of diseases matching this pattern, such as Severe Acute Respiratory Syndrome (SARS), an area of overt concern has been the transmission of infection through respiratory droplets. Approved facemasks are typically effective physical barriers for preventing the spread of viruses through droplets, but breaches in a mask's integrity can lead to an elevated risk of exposure and subsequent infection. Quality control mechanisms in place during the manufacturing process ensure that masks are defect free when leaving the factory, but little remains to detect damage caused by transportation or during usage. A system that could monitor masks in real time while they were in use would facilitate a more secure environment for treatment and screening. To fulfill this necessity, we have devised a touchless method to detect mask breaches in real time by utilizing the emissive properties of the mask in the thermal infrared spectrum. Specifically, we use a specialized thermal imaging system to detect minute air leakage in masks based on the principles of heat transfer and thermodynamics. The advantage of this passive modality is that thermal imaging does not require contact with the subject and can provide instant visualization and analysis. These capabilities can prove invaluable for protecting personnel in scenarios with elevated levels of transmission risk such as hospital clinics, border check points, and airports.

  14. Image analysis of nucleated red blood cells.

    PubMed

    Zajicek, G; Shohat, M; Melnik, Y; Yeger, A

    1983-08-01

    Bone marrow smears stained with Giemsa were scanned with a video camera under computer control. Forty-two cells representing the six differentiation classes of the red bone marrow were sampled. Each cell was digitized into 70 × 70 pixels, each pixel representing a square area of 0.4 μm² in the original image. The pixel gray values ranged between 0 and 255: zero stood for white, 255 for black, and the numbers in between for the various shades of gray. After separation and smoothing, the images were processed with a Sobel operator outlining the points of steepest gray-level change in the cell. These points constitute a closed curve, denominated the inner cell boundary, separating the cell into an inner and an outer region. Two types of features were extracted from each cell: form features (e.g., area and length) and gray-level features. Twenty-two features were tested for their discriminative merit; after selecting 16, the discriminant analysis program correctly classified all 42 cells into the 6 classes.
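    The Sobel step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of gradient-magnitude edge detection, not the authors' code; the kernels are the standard 3×3 Sobel pair and the boundary handling (valid region only) is an assumption:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude via direct 3x3 convolution (valid region only).
    High values mark the points of steepest gray-level change."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            out[y, x] = np.hypot((patch * KX).sum(), (patch * KY).sum())
    return out

# A vertical step edge: the response peaks along the intensity jump.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)
assert mag.max() > 0 and mag[1, 0] == 0.0  # flat regions give zero response
```

    Thresholding such a magnitude map and tracing the resulting closed curve is one way to obtain a boundary like the "inner cell boundary" the abstract describes.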

  15. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  16. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with a preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.

  17. Computerised anthropomorphometric analysis of images: case report.

    PubMed

    Ventura, F; Zacheo, A; Ventura, A; Pala, A

    2004-12-02

    The personal identification of living subjects through video-filmed images can occasionally be necessary, particularly in the following circumstances: (1) the need to identify unknown subjects by comparing two-dimensional images of someone of known identity with the subject; (2) the need to identify subjects taken in photographs or recorded on video camera by comparison with individuals of known identity. The final aim of our research was to analyse a video clip of a bank robbery and to determine whether one of the subjects was identifiable as one of the suspects. Following the correct methodology for personal identification, the original videotape of the robbery carried out in the bank was first analysed so as to study the characteristics of the criminal action and to pinpoint the best scenes for an anthropomorphometric analysis. The scene of the crime was then reconstructed by bringing the suspect back to the bank where the robbery took place; he was filmed with the same closed-circuit video cameras and made to assume positions as close as possible to those of the bank robber to be identified. Taking frame no. 17, points of comparable similarity were identified on the face and right ear of the perpetrator of the crime, and the same points of similarity were identified on the face of the suspect: right and left eyebrows, right and left eyes, "glabella", nose, mouth, chin, fold between nose and upper lip, right ear, helix, tragus, "fossetta", "conca" and lobule. After careful comparative morphometric computer analysis, it was concluded that none of the 17 points of similarity showed the same anthropomorphology (points of negative similarity). It is reasonable to sustain that 17 points of negative similarity (or non-coincidental points) is sufficient to exclude the identity of the person compared with the other.

  18. Multidimensional Image Analysis for High Precision Radiation Therapy.

    PubMed

    Arimura, Hidetaka; Soufi, Mazen; Haekal, Mohammad

    2017-01-01

    High precision radiation therapy (HPRT) has been improved by utilizing conventional image engineering technologies, but different frameworks are necessary for further improvement. This review paper attempts to define the multidimensional image and multidimensional image analysis, which may be feasible for increasing the accuracy of HPRT. A number of studies in the radiation therapy field are introduced to illustrate multidimensional image analysis, which could greatly assist clinical staff in radiation therapy planning, treatment, and the prediction of treatment outcomes.

  19. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They attribute the three images to spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.

  20. High resolution ultraviolet imaging spectrometer for latent image analysis.

    PubMed

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane; the spectral range covers both the near and middle UV bands, with approximately 100 wavelength samples over 240-370 nm. The control computer coordinates all the components of the instrument and enables capturing a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information for each pixel at many wavelength samples. A spectral calibration is carried out using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain a high resolution spectrum datacube. A pattern recognition algorithm is introduced to analyze the datacube and distinguish the latent traces from the base materials. This design is particularly good at identifying latent traces in the application field of forensic imaging.

  1. Macroscopic assessment of pulmonary emphysema by image analysis.

    PubMed Central

    Gevenois, P A; Zanen, J; de Maertelaer, V; De Vuyst, P; Dumortier, P; Yernault, J C

    1995-01-01

    AIMS--To propose a computerised image analysis based method for measuring, on paper mounted lung sections, the area macroscopically occupied by emphysema. METHODS--The study was based on the assessment of 69 lung sections prepared following a modified Gough-Wentworth technique. The results obtained from image analysis, point counting, and panel grading methods were compared, as was the repeatability of image analysis and panel grading. RESULTS--The results from image analysis and from point counting were not significantly different (p = 0.609) and significant quadratic regressions (r = 0.96, p < 0.001) were found between measurements from image analysis and from panel grading, the computerised technique being shown to be the most reproducible. CONCLUSIONS--Image analysis is a valuable and reproducible method to measure the area of lung macroscopically involved by emphysema. PMID:7615849

  2. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists better apprehend image analysis in the context of their research and should allow them to interact efficiently with image processing specialists.

  3. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on image intensities; in general, the domain of an image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology, and usually no function is defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis of both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images for schizophrenia and heart images for atrial fibrillation patients.

  4. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  5. SAR Image Texture Analysis of Oil Spill

    NASA Astrophysics Data System (ADS)

    Ma, Long; Li, Ying; Liu, Yu

    Oil spills seriously affect marine ecosystems and cause political and scientific concern because of their impact on fragile marine and coastal environments. To respond to oil spill emergencies, it is necessary to monitor spills using remote sensing. Spaceborne SAR is considered a promising monitoring method and has attracted attention from many researchers; however, research on SAR image texture analysis of oil spills is rarely reported. On 7 December 2007, a crane-carrying barge hit the Hong Kong-registered tanker "Hebei Spirit", releasing an estimated 10,500 metric tons of crude oil into the sea. Texture features of this oil spill were extracted from the GLCM (Grey Level Co-occurrence Matrix) using SAR as the data source. The affected area was extracted successfully after evaluating the capabilities of different texture features to monitor the oil spill. The results revealed that texture is an important feature for oil spill monitoring. Key words: oil spill, texture analysis, SAR
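    A GLCM and one derived texture feature (contrast) can be sketched as follows. This is an illustrative NumPy implementation on made-up patches, not the study's processing chain; the offset, grey-level count, and test data are assumptions. The intuition matches the abstract: a smooth oil slick yields low GLCM contrast, rough sea clutter yields high contrast:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for a single pixel offset (dx, dy),
    normalized to joint probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum_ij p(i,j) * (i - j)^2; low over smooth regions."""
    i, j = np.indices(p.shape)
    return (p * (i - j) ** 2).sum()

# A smooth "slick" patch vs. a rough "sea clutter" patch (4 grey levels).
smooth = np.ones((16, 16), dtype=int)
rng = np.random.default_rng(1)
rough = rng.integers(0, 4, size=(16, 16))
assert contrast(glcm(smooth, 4)) == 0.0
assert contrast(glcm(rough, 4)) > contrast(glcm(smooth, 4))
```

    In practice several offsets and several GLCM statistics (contrast, homogeneity, entropy, ...) are combined into a feature vector per image window before classification.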

  6. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5-pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A misregistration of the thermal IR band by four 30 m lines and columns was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid; no explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  7. A virtual laboratory for medical image analysis.

    PubMed

    Olabarriaga, Sílvia D; Glatard, Tristan; de Boer, Piter T

    2010-07-01

    This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented architecture decouples the user-friendly clients running on the user's workstation from the complexity of the grid applications and infrastructure. Data are stored on grid resources and can be browsed and viewed interactively by the user with the Virtual Resource Browser (VBrowser). Data analysis pipelines are described as Scufl workflows and enacted on the grid infrastructure transparently using the MOTEUR workflow management system. VBrowser plug-ins allow for easy experiment monitoring and error detection. Because of strict compliance with the grid authentication model, all operations are performed on behalf of the user, ensuring basic security and facilitating collaboration across organizations. The system has been operational and in daily use for eight months (as of December 2008), with six users, leading to the submission of 9000 jobs/month on average and the production of several terabytes of data.

  8. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  9. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

    During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that red, green, blue, and intensity parameters decreased due to the development of a global darker colour, while Heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection.

  10. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are degenerations and losses of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis for SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, image fusion was used to combine SPECT and MR images via the intervening CT image acquired by SPECT/CT. Mutual information (MI) was used for the registration between CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken while changing the direction. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. Applying the approach to 32 clinical SPECT/CT and MR cases, all cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
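    The mutual information criterion behind such CT-MR registration can be sketched from a joint intensity histogram. This is an illustrative implementation; the bin count and the synthetic test images are assumptions, not the study's software:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """MI between two images, estimated from their joint intensity
    histogram: sum p(x,y) * log(p(x,y) / (p(x) * p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0)
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(2)
ct = rng.random((64, 64))
aligned = 1.0 - ct                # perfectly registered, inverted contrast
shifted = np.roll(ct, 8, axis=1)  # misregistered by 8 pixels
# MI rewards statistical dependence, not identical intensities, which is
# why it works across modalities such as CT and MR.
assert mutual_information(ct, aligned) > mutual_information(ct, shifted)
```

    A registration algorithm searches over transforms (shifts, rotations, ...) for the one that maximizes this MI score between the two modalities.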

  11. Forensic Analysis of Digital Image Tampering

    DTIC Science & Technology

    2004-12-01

    Excerpts from the report's text and table of contents: Chapter 3 deals with the methodology of an experimental design for image forgery detection; Section 2.2 addresses digital watermarking, including an example of what effects an invisible watermark has on the results of each detection method; Section 2.3 addresses images of unknown origin.

  12. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems applied to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information, applied here to the cognitive meaning contained in the analyzed medical images. Semantic analysis is proposed to analyze the meaning of the data. Medical image analysis is presented and discussed as applied to various types of medical images, showing selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks; this is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from the analyzed data sets. These features allow a new way of analysis to be created.

  13. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study about criteria for analysis and indexing digitized images, and the different types of user queries done in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  15. A guide to human in vivo microcirculatory flow image analysis.

    PubMed

    Massey, Michael J; Shapiro, Nathan I

    2016-02-10

    Various noninvasive microscopic camera technologies have been used to visualize the sublingual microcirculation in patients. We describe a comprehensive approach to bedside in vivo sublingual microcirculation video image capture and analysis techniques in the human clinical setting. We present a user perspective and guide suitable for clinical researchers and developers interested in the capture and analysis of sublingual microcirculatory flow videos. We review basic differences in the cameras, optics, light sources, operation, and digital image capture. We describe common techniques for image acquisition and discuss aspects of video data management, including data transfer, metadata, and database design and utilization to facilitate the image analysis pipeline. We outline image analysis techniques and reporting including video preprocessing and image quality evaluation. Finally, we propose a framework for future directions in the field of microcirculatory flow videomicroscopy acquisition and analysis. Although automated scoring systems have not been sufficiently robust for widespread clinical or research use to date, we discuss promising innovations that are driving new development.

  16. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.

  17. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
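    The "globally best merges first" idea can be illustrated on a 1-D signal. This is a deliberately simplified, sequential sketch (not the MPP implementation, and 1-D rather than 2-D): every element starts as its own region, and at each step the adjacent pair whose means differ least is merged, so the result does not depend on a scan order:

```python
import numpy as np

def best_merge_segmentation(signal, n_regions):
    """Greedy best-merge region growing on a 1-D signal: repeatedly merge
    the adjacent region pair with the smallest difference of means until
    n_regions remain."""
    regions = [[v] for v in signal]  # each element starts as its own region
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        diffs = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(diffs))    # globally best merge goes first
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions

signal = [10, 11, 10, 50, 52, 51]
segs = best_merge_segmentation(signal, 2)
assert segs == [[10, 11, 10], [50, 52, 51]]
```

    Because the cheapest merge is always taken first, the final segmentation is determined by the data alone, which is the order-independence property the abstract contrasts with conventional region growing.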

  18. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2017-02-15

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks. Freely available extension to ImageJ2 (http://imagej.net/Downloads). Installation and use instructions available at http://imagej.net/MATLAB_Scripting. Tested with ImageJ 2.0.0-rc-54, Java 1.8.0_66 and MATLAB R2015b. eliceiri@wisc.edu. Supplementary data are available at Bioinformatics online.

  19. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. It can visualize the oral region in 3D and at high resolution. The CBCT jaw image carries potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics from normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed: the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n), the difference between teeth and bone peak values of the NH (Δp), and the ratio between teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
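    The four test parameters could be computed along these lines. This is a hedged sketch only: the ROI inputs, bin count, intensity range, and the exact definitions of "peak" and "range" are assumptions, not the authors' published formulas:

```python
import numpy as np

def nh_parameters(teeth, bone):
    """Compute s, n, delta_p, r from two ROI intensity arrays
    (teeth and bone); definitions here are illustrative assumptions."""
    s = teeth.mean() - bone.mean()          # difference of average intensity
    n = bone.mean() / teeth.mean()          # ratio of average intensity
    th, _ = np.histogram(teeth, bins=32, range=(0, 255), density=True)
    bh, _ = np.histogram(bone, bins=32, range=(0, 255), density=True)
    delta_p = th.max() - bh.max()           # difference of NH peak values
    r = np.ptp(teeth) / np.ptp(bone)        # ratio of intensity ranges
    return s, n, delta_p, r

# Synthetic ROIs: teeth are denser, hence brighter and narrower in CBCT.
rng = np.random.default_rng(3)
teeth = rng.normal(200, 10, 500).clip(0, 255)
bone = rng.normal(120, 25, 500).clip(0, 255)
s, n, delta_p, r = nh_parameters(teeth, bone)
assert s > 0 and 0 < n < 1  # teeth brighter than bone, as expected
```

    With such parameters computed per patient, a simple threshold or classifier could separate normal from abnormal bone, which is the classification use the abstract suggests for n, s, and Δp.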

  20. Common feature discriminant analysis for matching infrared face images to optical face images.

    PubMed

    Li, Zhifeng; Gong, Dihong; Qiao, Yu; Tao, Dacheng

    2014-06-01

    In biometrics research and industry, it is critical yet challenging to match infrared face images to optical face images. The major difficulty lies in the fact that a great discrepancy exists between an infrared face image and the corresponding optical face image because they are captured by different devices (an optical imaging device and an infrared imaging device). This paper presents a new approach called common feature discriminant analysis to reduce this discrepancy and improve optical-infrared face recognition performance. In this approach, a new learning-based face descriptor is first proposed to extract the common features from heterogeneous face images (infrared face images and optical face images), and an effective matching method is then applied to the resulting features to obtain the final decision. Extensive experiments are conducted on two large and challenging optical-infrared face data sets to show the superiority of our approach over the state of the art.

  1. Analysis of the First NIF Neutron Images

    NASA Astrophysics Data System (ADS)

    Wilson, D. C.; Batha, S.; Grim, G. P.; Guler, N.; Kline, J. L.; Kyrala, G. A.; Merrill, F. E.; Morgan, G. L.; Vinyard, N. S.; Volegov, P. L.; Bradley, D. K.; Clark, D. S.; Dixit, S. N.; Fittinghoff, D. N.; Glenn, S. M.; Glenzer, S.; Izumi, N.; Jones, O. S.; Le Pape, S.; Ma, T.; MacKinnon, A. J.; Sepke, S. M.; Spears, B. K.; Tommasini, R.; McKenty, P.

    2011-10-01

    Neutron imaging at the National Ignition Facility has obtained its first images from both directly laser-driven and X-radiation-driven implosions. A directly driven, DT-filled glass microballoon gave an oblate image (P2/P0 = -45%) whose size (P0 = 70 μm) fits within the X-ray images. Simulations using the polar-direct-drive laser pointing give a round image of P0 ~95 μm; however, as the electron flux limiter is reduced from 0.06 to 0.03, the image becomes oblate. The observed asymmetry can be reproduced by transferring ~10% of the energy from the outer laser beams to the inner. Radiation-driven implosions of ignition capsules with 20% D and 50% D produced oblate images of ~30 μm radius in 12-15 MeV neutrons. Images in 10-12 MeV neutrons, which have experienced one scattering in the fuel and number ~4% of the primaries, were larger (~44-56 μm). Image sizes indicate the compression of the fuel and are consistent with the observed 10-12 MeV / 13-15 MeV yield ratios. Work funded by the USDOE at LANL, LLNL, NSTEC, and LLE.

  2. Analysis of Anechoic Chamber Testing of the Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Fenigstein, David; Ruf, Chris; James, Mark; Simmons, David; Miller, Timothy; Buckley, Courtney

    2010-01-01

    The Hurricane Imaging Radiometer System (HIRAD) is a new airborne passive microwave remote sensor developed to observe hurricanes. HIRAD incorporates synthetic thinned array radiometry technology, which uses Fourier synthesis to reconstruct images from an array of correlated antenna elements. The HIRAD system response to a point emitter has been measured in an anechoic chamber, and with these data a Fourier-inversion image reconstruction algorithm has been developed. A performance analysis of the apparatus is presented, along with an overview of the image reconstruction algorithm.

  3. Antenna trajectory error analysis in backprojection-based SAR images

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Yanik, H. Cagri

    2014-06-01

    We present an analysis of the positioning errors in Backprojection (BP)-based Synthetic Aperture Radar (SAR) images due to antenna trajectory errors for a monostatic SAR traversing a straight linear trajectory. Our analysis is developed using microlocal analysis, which can provide an explicit quantitative relationship between the trajectory error and the positioning error in BP-based SAR images. The analysis is applicable to arbitrary trajectory errors in the antenna and can be extended to arbitrary imaging geometries. We present numerical simulations to demonstrate our analysis.

  4. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High-performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response, which leads to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients suffering right-hemisphere stroke are compared with adults with no known neurological disorder, and a population of normal 11-year-old school children is included to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis, this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  5. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
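    The table-driven batch workflow described above can be sketched in a few lines of Python. This is not Slide Set's actual API, just an illustration of the pattern it automates: a data table pairing images with regions of interest, one analysis command repeated over every row, and the parameters saved with the results (the table contents, parameter names, and the `analyze` function are hypothetical):

```python
import json

# Hypothetical data table: each row pairs an image file with a region of interest.
table = [
    {"image": "slide1.tif", "roi": "0,0,100,100"},
    {"image": "slide2.tif", "roi": "10,10,80,80"},
]

# Analysis parameters, kept in one place so they can be saved with the results.
params = {"threshold": 128, "min_area": 50}

def analyze(row, params):
    # Placeholder measurement: area of the ROI rectangle (x, y, w, h).
    x, y, w, h = map(int, row["roi"].split(","))
    return {"image": row["image"], "area": w * h}

# The same command is repeated automatically over every row of the table.
results = [analyze(row, params) for row in table]

# Persist parameters alongside results for transparency and reproducibility.
record = json.dumps({"params": params, "results": results}, indent=2)
```

    Chaining multiple commands would amount to feeding `results` into the next command's input table, which is the design choice that lets Slide Set compose complex analyses from simple steps.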

  6. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  7. Hierarchical manifold learning for regional image analysis.

    PubMed

    Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Joseph V; Rueckert, Daniel

    2014-02-01

    We present a novel method of hierarchical manifold learning which aims to automatically discover regional properties of image datasets. While traditional manifold learning methods have become widely used for dimensionality reduction in medical imaging, they suffer from only being able to consider whole images as single data points. We extend conventional techniques by additionally examining local variations, in order to produce spatially-varying manifold embeddings that characterize a given dataset. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate the utility of our method in two very different settings: 1) to learn the regional correlations in motion within a sequence of time-resolved MR images of the thoracic cavity; 2) to find discriminative regions of 3-D brain MR images associated with neurodegenerative disease.

  8. Image and Data-analysis Tools For Paleoclimatic Reconstructions

    NASA Astrophysics Data System (ADS)

    Pozzi, M.

    We propose here a directory of instruments and computing resources chosen to address the problems of paleoclimatic reconstruction. The following points are discussed in particular: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2); b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis); c) neural analyses (backpropagation net, Kohonen feature map, Hopfield net, genetic algorithms). 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graph, histogram, ternary, survivorship); b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, Scanning Electron Microscope and Optical Polarized Microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2-D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey-scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background); b) 3-D image analysis (basic stereological procedures, two-dimensional structures: area fraction from the point count, volume fraction from the point count; three-dimensional structures: surface area and the line intercept count; three-dimensional microstructures: line length and the

  9. Quantitation of vital bleaching by computer analysis of photographic images.

    PubMed

    Bentley, C; Leonard, R H; Nelson, C F; Bentley, S A

    1999-06-01

    The authors investigated the use of computer processing of photographic images to monitor changes in tooth brightness after nightguard vital bleaching, or NGVB. Photographs of shade guides and clinical cases (patients' teeth) were taken on 35-millimeter film with electronic flash illumination and processed commercially. A slide scanner was used to digitize images as red, green and blue, or RGB, files, with constant brightness, contrast and linearity settings; the images were then analyzed with commercial software. Relevant image components (that is, teeth or shade guide tabs) were separated, and histograms of various numerical color descriptors were generated for each image component. Analysis of shade tab images showed that the mean pixel intensity for the RGB blue channel, or MPIb, was the most satisfactory brightness descriptor, with clear sequential MPIb increments from lighter to darker shades in each series of colors (A through D) and close correlation with the manufacturer's brightness scale (r = .83). Mathematical analysis of MPIb data for shade tabs in the same image yielded a brightness index that was reproducible and correlated well with the manufacturer's brightness scale. Sequential measurements of this index in three subjects whose teeth were bleached with carbamide peroxide for 14 days correlated well with assessments made by visual shade guide comparisons. The authors conclude that computer analysis of digitized photographic images with internal color controls provides an index of tooth brightness that is reproducible from image to image. A brightness index derived from computer analysis of digitized photographic images may be useful for monitoring the effectiveness of NGVB.
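    Extracting the mean pixel intensity of the blue channel (MPIb) from a digitized RGB image is straightforward. A minimal sketch in Python, using synthetic uniform-color patches in place of scanned shade-tab images (the patch colors are hypothetical):

```python
import numpy as np

def mean_blue_intensity(rgb_image, mask=None):
    """Mean pixel intensity of the RGB blue channel (MPIb), optionally masked."""
    blue = rgb_image[..., 2].astype(float)
    if mask is not None:
        blue = blue[mask]
    return blue.mean()

# Hypothetical 4x4 RGB patches standing in for a lighter and a darker shade tab.
light_tab = np.full((4, 4, 3), (200, 190, 180), dtype=np.uint8)
dark_tab = np.full((4, 4, 3), (140, 120, 90), dtype=np.uint8)

mpib_light = mean_blue_intensity(light_tab)
mpib_dark = mean_blue_intensity(dark_tab)
```

    In the study, MPIb values of shade tabs included in the same photograph served as internal color controls, which is what makes the derived brightness index reproducible from image to image.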

  10. Quantification and description of fracture network by MRI image analysis.

    PubMed

    Balzarini, M; Nicula, S; Mattiello, D; Aliverti, E

    2001-01-01

    The contribution of fractures to total porosity and the geometrical description of fracture networks have been studied by image analysis applied to ¹H magnetic resonance imaging (MRI). Samples from reservoirs of different lithology were imaged with MSME 2D quantitative and 3D sequences. An image analysis procedure, developed ad hoc, was then applied to these acquisitions and the petrophysical parameters were computed. These parameters range from fracture porosity to fracture density.
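    As a toy illustration of how a petrophysical parameter can follow from a segmented image, fracture porosity can be taken as the fracture-voxel fraction of a binary mask (the mask below is hypothetical, not derived from MRI data):

```python
import numpy as np

# Hypothetical binary segmentation of one MRI slice: 1 = fracture, 0 = rock matrix.
seg = np.zeros((10, 10), dtype=int)
seg[4, :] = 1  # a single horizontal fracture trace across the slice

# Fracture porosity: fraction of the imaged area occupied by fractures.
fracture_porosity = seg.sum() / seg.size
```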

  11. Holographic Interferometry and Image Analysis for Aerodynamic Testing

    DTIC Science & Technology

    1980-09-01

    tunnels, (2) development of automated image analysis techniques for reducing quantitative flow-field data from holographic interferograms, and (3...investigation and development of software for the application of digital image analysis to other photographic techniques used in wind tunnel testing.

  12. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI), and chemical oxygen demand (COD). These measurements require laboratory tests that take many hours to yield a final value. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment, along with the procedures of image acquisition, preprocessing, segmentation, and analysis in this specific context. In the latter part, additional preprocessing procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the morphological parameters obtained by image analysis, and their correlation with the monitoring and prediction of activated sludge, are discussed. It is thus observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  13. Computer-based image analysis in breast pathology.

    PubMed

    Gandomkar, Ziba; Brennan, Patrick C; Mello-Thoms, Claudia

    2016-01-01

    Whole slide imaging (WSI) has the potential to be utilized in telepathology, teleconsultation, quality assurance, clinical education, and digital image analysis to aid pathologists. In this paper, the potential added benefits of computer-assisted image analysis in breast pathology are reviewed and discussed. One of the major advantages of WSI systems is the possibility of doing computer-based image analysis on the digital slides. The purpose of computer-assisted analysis of breast virtual slides can be (i) segmentation of desired regions or objects such as diagnostically relevant areas, epithelial nuclei, lymphocyte cells, tubules, and mitotic figures, (ii) classification of breast slides based on breast cancer (BCa) grades, the invasive potential of tumors, or cancer subtypes, (iii) prognosis of BCa, or (iv) immunohistochemical quantification. While encouraging results have been achieved in this area, further progress is still required to make computer-based image analysis of breast virtual slides acceptable for clinical practice.

  14. Computer-based image analysis in breast pathology

    PubMed Central

    Gandomkar, Ziba; Brennan, Patrick C.; Mello-Thoms, Claudia

    2016-01-01

    Whole slide imaging (WSI) has the potential to be utilized in telepathology, teleconsultation, quality assurance, clinical education, and digital image analysis to aid pathologists. In this paper, the potential added benefits of computer-assisted image analysis in breast pathology are reviewed and discussed. One of the major advantages of WSI systems is the possibility of doing computer-based image analysis on the digital slides. The purpose of computer-assisted analysis of breast virtual slides can be (i) segmentation of desired regions or objects such as diagnostically relevant areas, epithelial nuclei, lymphocyte cells, tubules, and mitotic figures, (ii) classification of breast slides based on breast cancer (BCa) grades, the invasive potential of tumors, or cancer subtypes, (iii) prognosis of BCa, or (iv) immunohistochemical quantification. While encouraging results have been achieved in this area, further progress is still required to make computer-based image analysis of breast virtual slides acceptable for clinical practice. PMID:28066683

  15. Low-cost image analysis system

    SciTech Connect

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. The system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  16. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect of skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can serve as an important assistive tool in dermatology.

  17. Analysis of Images from Experiments Investigating Fragmentation of Materials

    SciTech Connect

    Kamath, C; Hurricane, O

    2007-09-10

    Image processing techniques have been used extensively to identify objects of interest in image data and extract representative characteristics for these objects. However, this can be a challenge due to the presence of noise in the images and the variation across images in a dataset. When the number of images to be analyzed is large, the algorithms used must also be relatively insensitive to the choice of parameters and lend themselves to partial or full automation. This not only avoids manual analysis which can be time consuming and error-prone, but also makes the analysis reproducible, thus enabling comparisons between images which have been processed in an identical manner. In this paper, we describe our approach to extracting features for objects of interest in experimental images. Focusing on the specific problem of fragmentation of materials, we show how we can extract statistics for the fragments and the gaps between them.

  18. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach to haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that hazy weather weakens the interpretation of remote sensing images because of their poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure are considered. On the other hand, the fact that objects and haze have different frequency distributions is exploited: multi-scale analysis through the wavelet transform allows the low-frequency components, where haze resides, and the high-frequency coefficients, which carry image details and edges, to be processed separately. Following the measurement of polarization via the Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in hazy weather, from which the best polarized image I_min and the worst one I_max were synthesized. These two haze-contaminated polarized images were then decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed via a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The final haze-free image is reconstructed by inverse wavelet transform. Experimental results verify that the proposed dehazing method can strongly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
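    The separation of low- and high-frequency content that such methods rely on can be illustrated with a one-level 2-D Haar transform, the simplest wavelet. This sketch shows only the decomposition into an approximation band plus detail bands and its exact reconstruction; the polarization-based dehazing applied to the low-frequency band is not reproduced here, and the test image is synthetic:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0  # low-frequency approximation (haze lives here)
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Inverse of haar_dwt2: rebuilds the image from the four sub-bands."""
    lh, hl, hh = details
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# Synthetic image: decompose, (optionally process each band), reconstruct.
rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (8, 8))
ll, details = haar_dwt2(img)
recon = haar_idwt2(ll, details)
```

    In the dehazing pipeline, `ll` would be fed to the polarization dehazing step and the detail bands to the nonlinear transform before reconstruction.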

  19. Object-based image analysis using multiscale connectivity.

    PubMed

    Braga-Neto, Ulisses; Goutsias, John

    2005-06-01

    This paper introduces a novel approach for image analysis based on the notion of multiscale connectivity. We use the proposed approach to design several novel tools for object-based image representation and analysis which exploit the connectivity structure of images in a multiscale fashion. More specifically, we propose a nonlinear pyramidal image representation scheme, which decomposes an image at different scales by means of multiscale grain filters. These filters gradually remove connected components from an image that fail to satisfy a given criterion. We also use the concept of multiscale connectivity to design a hierarchical data partitioning tool. We employ this tool to construct another image representation scheme, based on the concept of component trees, which organizes partitions of an image in a hierarchical multiscale fashion. In addition, we propose a geometrically-oriented hierarchical clustering algorithm which generalizes the classical single-linkage algorithm. Finally, we propose two object-based multiscale image summaries, reminiscent of the well-known (morphological) pattern spectrum, which can be useful in image analysis and image understanding applications.

  20. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, and thus fulfills the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases, as well as the method design, were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Comparison of sonochemiluminescence images using image analysis techniques and identification of acoustic pressure fields via simulation.

    PubMed

    Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong

    2017-05-01

    One common method to determine the existence of cavitational activity in power ultrasonic systems is to capture images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted from SL or SCL was detected by counting photons. Though effective, this method cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable identification of 'active' sonochemical zones. However, these images often provide only qualitative data, as harvesting light-intensity data from them is tedious and requires high-resolution images. In this work, we propose a new image analysis technique using pseudo-coloured images to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones.
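    Quantifying zones from an intensity image by banding, the step a pseudo-colour map visualizes, can be sketched as follows (the band thresholds and the synthetic SCL image are hypothetical):

```python
import numpy as np

def zone_fractions(intensity, thresholds=(0.25, 0.5, 0.75)):
    """Fraction of pixels in each intensity band; a pseudo-colour map assigns
    one colour per band, and these fractions are the band areas it displays."""
    bands = np.digitize(intensity, thresholds)  # band index 0..len(thresholds)
    counts = np.bincount(bands.ravel(), minlength=len(thresholds) + 1)
    return counts / intensity.size

# Hypothetical normalized SCL intensity image.
rng = np.random.default_rng(2)
scl = rng.uniform(0.0, 1.0, (64, 64))
fractions = zone_fractions(scl)
```

    The highest band's fraction would correspond to the most sonochemically active zone, which is what gets compared against the simulated acoustic pressure field.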

  2. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques, which deal directly with image gray levels, the proposed approach generates abundance fractional images of the potential targets present in an image scene and then encodes these fractional images to achieve data compression. Since the information vital for image analysis is generally preserved in the abundance fractional images, the loss of information may have very little impact on image analysis; on some occasions, it even improves analysis performance. Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data experiments demonstrate that the technique can effectively detect and classify targets while achieving very high compression ratios.
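    Linear spectral unmixing, the step that produces abundance fractional images, can be sketched with ordinary least squares. Note that the paper's method is the fully constrained variant (sum-to-one and nonnegativity constraints on the abundances), which this simplified sketch omits; the endmember signatures and mixture below are hypothetical:

```python
import numpy as np

# Hypothetical endmember signatures: one column per material, one row per band.
E = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.9]])

# Synthesize a pixel spectrum as a 60/40 linear mixture of the endmembers.
true_abund = np.array([0.6, 0.4])
pixel = E @ true_abund

# Unconstrained least-squares abundance estimate (FCLS would additionally
# enforce abundances >= 0 and summing to one).
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

    Repeating this per pixel yields one abundance fractional image per endmember; the compression scheme then encodes those few fractional images instead of all spectral bands.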

  3. Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology.

    PubMed

    Webster, J D; Dunstan, R W

    2014-01-01

    Digital pathology, the practice of pathology using digitized images of pathologic specimens, has been transformed in recent years by the development of whole-slide imaging systems, which allow for the evaluation and interpretation of digital images of entire histologic sections. Applications of whole-slide imaging include rapid transmission of pathologic data for consultations and collaborations, standardization and distribution of pathologic materials for education, tissue specimen archiving, and image analysis of histologic specimens. Histologic image analysis allows for the acquisition of objective measurements of the histomorphologic, histochemical, and immunohistochemical properties of tissue sections, increasing both the quantity and quality of data obtained from histologic assessments. Currently, numerous histologic image analysis software solutions are commercially available. Choosing the appropriate solution depends on considerations of the investigative question, computer programming and image analysis expertise, and cost. However, all studies using histologic image analysis require careful consideration of preanalytical variables, such as tissue collection, fixation, and processing, and of experimental design, including sample selection, controls, reference standards, and the variables being measured. The fields of digital pathology and histologic image analysis are continuing to evolve, and their potential impact on pathology is still growing. These methodologies will increasingly transform the practice of pathology, allowing it to mature toward a quantitative science. However, this maturation requires pathologists to be at the forefront of the process, ensuring appropriate application of the methods and the validity of their results. Therefore, histologic image analysis and the field of pathology should co-evolve, creating a symbiotic relationship that results in high-quality, reproducible, objective data.

  4. Image analysis of vocal fold histology

    NASA Astrophysics Data System (ADS)

    Reinisch, Lou; Garrett, C. Gaelyn

    2001-05-01

    To visualize the concentration gradients of collagen, elastin, and ground substance in histologic sections of vocal folds, an image enhancement scheme was devised. Slides stained with Movat's solution were viewed on a light microscope and the image was digitally photographed. Using commercially available software, all pixels within a given color range were selected from the mucosa in the image. With the Movat's pentachrome stain, yellow to yellow-brown pixels represent mature collagen, blue to blue-green pixels represent young collagen (collagen that is not fully cross-linked), and black to dark-violet pixels represent elastin. From each color-range selection, a black-and-white image was created: pixels outside the color range were black, and the selected pixels within the range were white. The image was then averaged and smoothed to produce 256 levels of gray at lower spatial resolution; this new grey-scale image shows the concentration gradient. These images were further enhanced with contour lines surrounding equal levels of gray. This technique is helpful for comparing the micro-anatomy of the vocal folds. For instance, we find a large concentration of collagen deep in the mucosa and adjacent to the vocalis muscle.

  5. Machine Learning Algorithms Implemented in Image Analysis

    PubMed Central

    Chen, J.; Renner, L.; Neuringer, M.; Cornea, A.

    2014-01-01

    A typical core facility is faced with a wide variety of experimental paradigms, samples, and images to be analyzed. They typically have one thing in common: a need to segment features of interest from the rest of the image. In many cases, for example fluorescence images with good contrast and signal-to-noise ratio, intensity segmentation may be successful. Often, however, images may not be acquired under optimum conditions, or features of interest are not distinguished by intensity alone; examples we encountered include retina fundus photographs, histological stains, and DAB immunohistochemistry. We used machine learning algorithms as implemented in FIJI to isolate specific features in longitudinal retinal photographs of non-human primates. Images acquired over several years with different technologies, cameras, and skills were analyzed to evaluate small changes with precision. The protocol used includes Scale-Invariant Feature Transform (SIFT) registration, Contrast Limited Adaptive Histogram Equalization (CLAHE), and Weka training. Variance of results for different images of the same time point, and for different raters of the same images, was less than 10% in most cases.

  6. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion following intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation called a flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and propulsion frequency. By integrating these techniques into a single system, the analysis workflow requires significantly less user interaction and yields more reliable measurements.
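
    The propulsion-frequency computation can be sketched as threshold-crossing counting on a fluorescence time trace; the synthetic trace and threshold below are illustrative, not the paper's algorithm:

```python
import numpy as np

def propulsion_events(signal, thresh):
    """Count upward threshold crossings = number of propulsion 'packets'."""
    above = np.asarray(signal) > thresh
    return int(np.sum(above[1:] & ~above[:-1]))

# Synthetic fluorescence trace sampled at 10 Hz with 3 propulsion pulses:
fs = 10.0
t = np.arange(0, 30, 1 / fs)
trace = sum(np.exp(-0.5 * ((t - c) / 0.8) ** 2) for c in (5, 15, 25))
n = propulsion_events(trace, 0.5)
freq = n / t[-1]   # propulsions per second
```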

  7. Digital imaging techniques in experimental stress analysis

    NASA Technical Reports Server (NTRS)

    Peters, W. H.; Ranson, W. F.

    1982-01-01

    Digital imaging techniques are utilized as a measure of surface displacement components in laser speckle metrology. An image scanner which is interfaced to a computer records and stores in memory the laser speckle patterns of an object in a reference and a deformed configuration. Subsets of the deformed images are numerically correlated with the reference images as a measure of surface displacements. Discrete values are determined around a closed contour for plane problems, which then become input to a boundary integral equation method to calculate surface tractions on the contour. Stresses are then calculated within this boundary. The solution procedure is illustrated by a numerical example for a case of uniform tension.
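
    The subset correlation step can be sketched as a brute-force integer-displacement search minimising the sum of squared differences; the speckle pattern and imposed shift below are synthetic:

```python
import numpy as np

def subset_displacement(ref, deformed, y, x, size, search):
    """Find the integer displacement of a reference subset by minimising SSD."""
    sub = ref[y:y + size, x:x + size].astype(float)
    best, best_dxy = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = deformed[y + dy:y + dy + size, x + dx:x + dx + size].astype(float)
            ssd = np.sum((win - sub) ** 2)
            if ssd < best:
                best, best_dxy = ssd, (dy, dx)
    return best_dxy

# Synthetic speckle pattern, shifted by (2, 3) pixels in the "deformed" image:
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
deformed = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
dy, dx = subset_displacement(ref, deformed, 20, 20, 16, 5)
```

    Repeating this for subsets around the closed contour yields the discrete displacement values that feed the boundary integral method.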

  8. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been a highly active research area within medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, along with its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques on these reference data, are expected to play a key role. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
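
    A minimal sketch of the two-color ratio idea: per-pixel intensity ratios from two wavelength-filtered images are mapped to temperature through a calibration curve. The calibration values here are made up for illustration:

```python
import numpy as np

# Hypothetical calibration: ratio of two filtered phosphor images vs. temperature.
cal_T = np.array([300., 350., 400., 450., 500.])       # kelvin
cal_ratio = np.array([0.20, 0.35, 0.55, 0.80, 1.10])   # I_a / I_b

def temperature_map(img_a, img_b):
    """Map per-pixel intensity ratios to temperature by interpolation."""
    ratio = img_a / np.maximum(img_b, 1e-9)   # guard against division by zero
    return np.interp(ratio, cal_ratio, cal_T)

a = np.array([[0.55, 0.20]])
b = np.array([[1.00, 1.00]])
T = temperature_map(a, b)
```

    The ratio formulation cancels illumination and coating-thickness variations to first order, which is the reason two filtered images are used rather than one.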

  10. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  11. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  12. Trajectory analysis for magnetic particle imaging.

    PubMed

    Knopp, T; Biederer, S; Sattel, T; Weizenecker, J; Gleich, B; Borgert, J; Buzug, T M

    2009-01-21

    Recently, a new imaging technique called magnetic particle imaging was proposed. The method uses the nonlinear response of magnetic nanoparticles when a time-varying magnetic field is applied. Spatial encoding is achieved by moving a field-free point through the object of interest while the field strength in the vicinity of the point remains high. Resolution in the submillimeter range is provided even for fast data acquisition sequences. In this paper, a simulation study is performed on different trajectories for moving the field-free point through the field of view. The purpose is to provide information essential for the design of a magnetic particle imaging scanner. Trajectories are compared with respect to density, speed and image quality when applied in data acquisition. Moreover, since simulation of the involved physics is a time-demanding task, an efficient implementation utilizing caching techniques is presented.
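
    A rough way to compare trajectory density is to count how much of a gridded field of view the field-free point (FFP) visits over one repetition. The sine-drive Lissajous model and the frequency pairs below are illustrative, not the paper's scanner parameters:

```python
import numpy as np

def lissajous_coverage(fx, fy, n_samples=20000, grid=32):
    """Fraction of a grid over the field of view visited by the FFP."""
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x = np.sin(2 * np.pi * fx * t)   # FFP position, normalised field of view
    y = np.sin(2 * np.pi * fy * t)
    ix = np.minimum(((x + 1) / 2 * grid).astype(int), grid - 1)
    iy = np.minimum(((y + 1) / 2 * grid).astype(int), grid - 1)
    visited = np.zeros((grid, grid), dtype=bool)
    visited[iy, ix] = True
    return visited.mean()

dense = lissajous_coverage(32, 33)   # neighbouring frequencies -> dense coverage
sparse = lissajous_coverage(2, 3)    # small frequency ratio -> sparse coverage
```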

  13. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  14. Autonomous image data reduction by analysis and interpretation

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi; Ritter, Niles

    1988-01-01

    Image data is a critical component of the scientific information acquired by space missions. Compression of image data is required due to the limited bandwidth of the data transmission channel and limited memory space on the acquisition vehicle. This need becomes more pressing when dealing with multispectral data where each pixel may comprise 300 or more bytes. An autonomous, real time, on-board image analysis system for an exploratory vehicle such as a Mars Rover is developed. The completed system will be capable of interpreting image data to produce reduced representations of the image, and of making decisions regarding the importance of data based on current scientific goals. Data from multiple sources, including stereo images, color images, and multispectral data, are fused into single image representations. Analysis techniques emphasize artificial neural networks. Clusters are described by their outlines and class values. These analysis and compression techniques are coupled with decision making capacity for determining importance of each image region. Areas determined to be noise or uninteresting can be discarded in favor of more important areas. Thus limited resources for data storage and transmission are allocated to the most significant images.
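
    Although the abstract emphasises neural networks, the clustering idea can be illustrated with a minimal k-means on toy multispectral pixels; this is a generic sketch, not the authors' algorithm:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: returns per-point labels and cluster centres."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to the nearest centre, then recompute centres.
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels, centres

# Toy multispectral pixels: two well-separated spectral classes.
rng = np.random.default_rng(1)
a = rng.normal(10, 1, (50, 3))
b = rng.normal(50, 1, (50, 3))
pixels = np.vstack([a, b])
labels, centres = kmeans(pixels, 2)
```

    Each cluster would then be summarised by its outline and class value, so only the compact description is stored or transmitted.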

  15. 5-ALA induced fluorescent image analysis of actinic keratosis

    NASA Astrophysics Data System (ADS)

    Cho, Yong-Jin; Bae, Youngwoo; Choi, Eung-Ho; Jung, Byungjo

    2010-02-01

    In this study, we quantitatively analyzed 5-ALA induced fluorescent images of actinic keratosis using digital fluorescent color and hyperspectral imaging modalities. UV-A was utilized to induce fluorescence, and actinic keratosis (AK) lesions were demarcated from the surrounding normal region using different methods. Eight subjects with AK lesions participated in this study. In the hyperspectral imaging modality, a spectral analysis method was applied to the hyperspectral cube image and AK lesions were demarcated from the normal region. Before image acquisition, we designated biopsy positions for histopathology of the AK lesion and the surrounding normal region. Erythema index (E.I.) values on both regions were calculated from the spectral cube data. Image analysis of the subjects resulted in two different groups: the first group with higher fluorescence signal and E.I. on the AK lesion than the normal region; the second group with lower fluorescence signal and without a large difference in E.I. between the two regions. In fluorescent color image analysis of facial AK, E.I. images were calculated on both normal and AK lesions and compared with the results of the hyperspectral imaging modality. The results might indicate that the different intensity of fluorescence and E.I. among the subjects with AK can be interpreted as different phases of morphological and metabolic changes of AK lesions.

  16. Autonomous image data reduction by analysis and interpretation

    NASA Astrophysics Data System (ADS)

    Eberlein, Susan; Yates, Gigi; Ritter, Niles

    Image data is a critical component of the scientific information acquired by space missions. Compression of image data is required due to the limited bandwidth of the data transmission channel and limited memory space on the acquisition vehicle. This need becomes more pressing when dealing with multispectral data where each pixel may comprise 300 or more bytes. An autonomous, real time, on-board image analysis system for an exploratory vehicle such as a Mars Rover is developed. The completed system will be capable of interpreting image data to produce reduced representations of the image, and of making decisions regarding the importance of data based on current scientific goals. Data from multiple sources, including stereo images, color images, and multispectral data, are fused into single image representations. Analysis techniques emphasize artificial neural networks. Clusters are described by their outlines and class values. These analysis and compression techniques are coupled with decision-making capacity for determining importance of each image region. Areas determined to be noise or uninteresting can be discarded in favor of more important areas. Thus limited resources for data storage and transmission are allocated to the most significant images.

  17. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research, which attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890

  18. Radar images analysis for scattering surfaces characterization

    NASA Astrophysics Data System (ADS)

    Piazza, Enrico

    1998-10-01

    According to the different problems and techniques related to the detection and recognition of airplanes and vehicles moving on the airport surface, the present work mainly deals with the processing of images gathered by a high-resolution radar sensor. The radar images used to test the investigated algorithms are sequences of images obtained in field experiments carried out by the Electronic Engineering Department of the University of Florence. The radar is the Ka-band radar operating in the 'Leonardo da Vinci' Airport in Fiumicino (Rome). The images obtained from the radar scan converter are digitized and expressed in x, y (pixel) coordinates. For a correct matching of the images, these are corrected into true geometrical coordinates (meters) on the basis of fixed points on an airport map. By correlating a 2-D multipoint airplane template with actual radar images, the value of the signal at the points involved in the template can be extracted. Results for many observations show a typical response for the main sections of the fuselage and the wings. For the fuselage, the back-scattered echo is low at the nose, becomes larger near the center of the aircraft, and then decreases again toward the tail. For the wings, the signal grows with a fairly regular slope from the fuselage to the tips, where the signal is strongest.
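
    The pixel-to-meter correction from fixed map points can be sketched as a least-squares affine fit; the control points below are hypothetical, not the Fiumicino survey data:

```python
import numpy as np

def fit_affine(px, world):
    """Least-squares affine map from pixel coords to map coords (meters)."""
    n = len(px)
    A = np.hstack([px, np.ones((n, 1))])           # rows are [x, y, 1]
    coeff, *_ = np.linalg.lstsq(A, world, rcond=None)
    return coeff                                    # (3, 2) matrix

# Hypothetical control points: pixel positions of landmarks and their
# surveyed positions on the airport map (meters).
px = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
world = np.array([[500., 200.], [700., 200.], [500., 400.], [700., 400.]])
M = fit_affine(px, world)
pt = np.array([50., 50., 1.]) @ M   # pixel (50, 50) in map meters
```

    With more than three control points the fit is over-determined, which averages out digitisation errors in the landmark positions.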

  19. Comparing methods for analysis of biomedical hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.
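
    One of the common algorithms in such comparisons is linear spectral unmixing, sketched here with made-up endmember spectra (a weak fluorescent-protein signature against a broad autofluorescence background):

```python
import numpy as np

# Columns are reference emission spectra sampled at 5 wavelength bands:
# a fluorescent-protein spectrum and a broad autofluorescence spectrum
# (illustrative shapes, not measured data).
E = np.array([[0.05, 0.60],
              [0.30, 0.55],
              [0.90, 0.50],
              [0.40, 0.45],
              [0.10, 0.40]])

def unmix(pixel_spectrum):
    """Least-squares abundances of each endmember in a measured spectrum."""
    coeff, *_ = np.linalg.lstsq(E, pixel_spectrum, rcond=None)
    return coeff

# Weak protein signal (0.2) mixed with strong autofluorescence (1.5):
measured = E @ np.array([0.2, 1.5])
abund = unmix(measured)
```

    The comparison framework described above would then score how linearly the recovered protein abundance tracks the true concentration as it is swept toward zero.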

  20. Identifying radiotherapy target volumes in brain cancer by image analysis.

    PubMed

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used in this process. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied on the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients, however, more testing and validation on a much larger patient cohort is required.
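
    The Dice similarity coefficient used for the comparison can be computed directly from two binary masks; the toy contours below are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks, in percent."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 100.0 * 2.0 * np.logical_and(a, b).sum() / denom if denom else 100.0

auto = np.zeros((10, 10)); auto[2:8, 2:8] = 1       # automated contour
manual = np.zeros((10, 10)); manual[3:9, 3:9] = 1   # clinician's contour
score = dice(auto, manual)   # two offset 6x6 squares overlap at ~69%
```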

  1. Identifying radiotherapy target volumes in brain cancer by image analysis

    PubMed Central

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B.; Erridge, Sara C.; McLaughlin, Stephen

    2015-01-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, which will continue to increase as more complex image sequences are used in this process. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied on the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients, however, more testing and validation on a much larger patient cohort is required. PMID:26609418

  2. Pattern recognition software and techniques for biological image analysis.

    PubMed

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  3. Pattern Recognition Software and Techniques for Biological Image Analysis

    PubMed Central

    Shamir, Lior; Delaney, John D.; Orlov, Nikita; Eckley, D. Mark; Goldberg, Ilya G.

    2010-01-01

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays. PMID:21124870

  4. Research of second harmonic generation images based on texture analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures, to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, a potential noninvasive tool for imaging biological tissues, has been widely used in medicine, with reduced phototoxicity and photobleaching. In this paper, we clarify the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications, reviewing studies of the technique. Moreover, we apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features from SHG images of collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods under receiver operating characteristic analysis, LBP combined with the wavelet transform achieved higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis of SHG images are discussed.
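
    The basic 8-neighbour LBP code can be sketched as follows (plain LBP, without the rotation-invariant or uniform variants the literature also uses):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern codes for interior pixels."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # centre pixels
    # Clockwise neighbour offsets starting at the top-left corner:
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit  # set bit where neighbour >= centre
    return code

# A flat patch: every neighbour equals the centre, so all 8 bits are set.
flat = np.full((5, 5), 7)
codes = lbp_image(flat)
```

    Histograms of these codes (often combined with wavelet subband statistics, as in the paper) form the feature vector fed to the classifier.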

  5. [Evaluation of dental plaque by quantitative digital image analysis system].

    PubMed

    Huang, Z; Luan, Q X

    2016-04-18

    To analyze plaque staining images using image analysis software, to verify the maneuverability, practicability and repeatability of this technique, and to evaluate the influence of different plaque stains. In the study, 30 volunteers were enrolled from the new dental students of Peking University Health Science Center in accordance with the inclusion criteria. Digital images of the anterior teeth were acquired after plaque staining, following a standardized imaging protocol. Image analysis was performed using Image Pro Plus 7.0, and the Quigley-Hein plaque indexes of the anterior teeth were evaluated. The plaque stain area percentage and the corresponding dental plaque index were highly correlated, with a Spearman correlation coefficient of 0.776 (P<0.01). Intraclass correlation coefficients of the tooth area and plaque area calculated by two researchers using the software were 0.956 and 0.930 (P<0.01). The Bland-Altman plot showed only a few points outside the 95% limits of agreement. Analysis of images from the different plaque stains showed that the difference in tooth area measurements was not significant, while the difference in plaque area measurements was significant (P<0.01). This method is easy to operate and control, is highly correlated with the traditional plaque index through the calculated percentage of plaque area, and has good reproducibility. The choice of plaque stain has little effect on image segmentation results. A plaque stain that is sensitive for image analysis is recommended.
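
    The plaque-area percentage and the Spearman correlation against the plaque index can be sketched as follows; the per-tooth values are hypothetical:

```python
import numpy as np

def plaque_percent(tooth_mask, plaque_mask):
    """Stained-plaque pixels as a percentage of the tooth area."""
    return 100.0 * plaque_mask.sum() / tooth_mask.sum()

def spearman(x, y):
    """Spearman rank correlation (no ties assumed in this sketch)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Segmented masks for one tooth: 30% of the tooth area carries stain.
tooth = np.ones((10, 10)); plaque = np.zeros((10, 10)); plaque[:3] = 1
pct = plaque_percent(tooth, plaque)

# Hypothetical per-tooth data: plaque area % vs. Quigley-Hein index.
area_pct = np.array([5., 18., 32., 47., 61., 80.])
qh_index = np.array([0., 1., 2., 3., 4., 5.])
rho = spearman(area_pct, qh_index)   # 1.0 for perfectly monotone pairing
```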

  6. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1988-01-19

    approach for the analysis of aerial images. In this approach image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain-dependent knowledge about prototypical urban

  7. Analysis of live cell images: Methods, tools and opportunities.

    PubMed

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The potential opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are discussed. This review also includes an overview of the different available software packages and toolkits.

  8. Digital Image Analysis for DETCHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
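
    RGB extraction over a sensor spot, with a simple change-based code, can be sketched as follows; the threshold, images, and coding rule are hypothetical, not the paper's scheme:

```python
import numpy as np

def mean_rgb(img, y0, y1, x0, x1):
    """Average red-green-blue values inside a rectangular sensor spot."""
    roi = img[y0:y1, x0:x1].reshape(-1, 3).astype(float)
    return roi.mean(axis=0)

def rgb_code(before, after, tol=10.0):
    """1 if a channel changed by more than `tol`, else 0 (per channel)."""
    return tuple(int(abs(d) > tol) for d in (after - before))

# Hypothetical scanned spot before/after analyte exposure: red drops,
# green rises strongly, blue barely moves.
img0 = np.full((20, 20, 3), (200, 40, 40), dtype=np.uint8)
img1 = np.full((20, 20, 3), (180, 120, 45), dtype=np.uint8)
code = rgb_code(mean_rgb(img0, 5, 15, 5, 15), mean_rgb(img1, 5, 15, 5, 15))
```

    Averaging over the spot rather than sampling single pixels is what makes the scanner's flatter illumination pay off in reproducibility.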

  9. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language (IDL) 8.1. We show that cluster analysis yields good agreement between the clusters and the corresponding emission regions such as the bladder, the liver, and the tumor. We also show a good correspondence between the time-activity curves of the different regions obtained by using CA and manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
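The k-means step can be sketched in plain numpy (a generic sketch, not the authors' IDL implementation): each pixel contributes a time-activity curve, and curves with similar kinetics are grouped together. The synthetic "washout" and "uptake" families below are hypothetical stand-ins for dCLI data.

```python
import numpy as np

def kmeans(data, k, iters=50):
    """Plain k-means: each row of `data` is a per-pixel time-activity curve."""
    # Deterministic init: pick k evenly spaced rows as starting centers.
    centers = data[:: max(1, len(data) // k)][:k].astype(float)
    for _ in range(iters):
        # Assign each curve to its nearest center (squared Euclidean distance).
        labels = np.argmin(((data[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned curves.
        centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic kinetic families: washout (decaying) and uptake (rising).
t = np.linspace(0.0, 1.0, 20)
rng = np.random.default_rng(0)
washout = np.exp(-3 * t) + 0.02 * rng.normal(size=(50, 20))
uptake = (1 - np.exp(-3 * t)) + 0.02 * rng.normal(size=(50, 20))
curves = np.vstack([washout, uptake])

labels, centers = kmeans(curves, k=2)
# All washout curves share one label; all uptake curves share the other.
```

Mapping the labels back onto pixel coordinates would yield the clustered emission regions the abstract compares against manual regions of interest.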

  11. Uncooled LWIR imaging: applications and market analysis

    NASA Astrophysics Data System (ADS)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing opportunities to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market that requires uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled sensors, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as the application of uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of such commodity products is sure to change existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies supplying components and materials, such as lens and getter materials, to enter the consumer market.

  12. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D wholebody scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
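The three-region split a tilted plane defines follows directly from the plane's implicit equation. A minimal sketch (the plane parameters and points are hypothetical, not 3DM's actual interface or equations):

```python
import numpy as np

def partition_by_plane(points, normal, offset, tol=1e-9):
    """Split 3-D points into the three disjoint regions a tilted plane defines:
    points on the plane {x : normal . x = offset} and those on either side."""
    side = points @ np.asarray(normal, dtype=float) - offset
    return (points[side < -tol],          # one side of the plane
            points[np.abs(side) <= tol],  # on the plane
            points[side > tol])           # other side of the plane

# A plane tilted 45 degrees about the x-axis: y + z = 2.
pts = np.array([[0.0, 0.0, 0.0],   # below: 0 + 0 < 2
                [0.0, 2.0, 0.0],   # on:    2 + 0 = 2
                [1.0, 2.0, 3.0]])  # above: 2 + 3 > 2
below, on, above = partition_by_plane(pts, normal=[0.0, 1.0, 1.0], offset=2.0)
print(len(below), len(on), len(above))  # 1 1 1
```

Tilting the plane amounts to changing the normal vector; moving it through the body amounts to changing the offset, with the sign of the dot product classifying every scanned point.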

  13. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all 9 images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.

  14. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis.

    PubMed

    Vest, Joshua R; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B

    2015-12-01

    Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004-2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = -0.17; 95% confidence interval [CI] = [-0.25, -0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  15. Image Sharing Technologies and Reduction of Imaging Utilization: A Systematic Review and Meta-analysis

    PubMed Central

    Vest, Joshua R.; Jung, Hye-Young; Ostrovsky, Aaron; Das, Lala Tanmoy; McGinty, Geraldine B.

    2016-01-01

    Introduction Image sharing technologies may reduce unneeded imaging by improving provider access to imaging information. A systematic review and meta-analysis were conducted to summarize the impact of image sharing technologies on patient imaging utilization. Methods Quantitative evaluations of the effects of PACS, regional image exchange networks, interoperable electronic health records, tools for importing physical media, and health information exchange systems on utilization were identified through a systematic review of the published and gray English-language literature (2004–2014). Outcomes, standard effect sizes (ESs), settings, technology, populations, and risk of bias were abstracted from each study. The impact of image sharing technologies was summarized with random-effects meta-analysis and meta-regression models. Results A total of 17 articles were included in the review, with a total of 42 different studies. Image sharing technology was associated with a significant decrease in repeat imaging (pooled effect size [ES] = −0.17; 95% confidence interval [CI] = [−0.25, −0.09]; P < .001). However, image sharing technology was associated with a significant increase in any imaging utilization (pooled ES = 0.20; 95% CI = [0.07, 0.32]; P = .002). For all outcomes combined, image sharing technology was not associated with utilization. Most studies were at risk for bias. Conclusions Image sharing technology was associated with reductions in repeat and unnecessary imaging, in both the overall literature and the most-rigorous studies. Stronger evidence is needed to further explore the role of specific technologies and their potential impact on various modalities, patient populations, and settings. PMID:26614882

  16. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
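Fractal dimension estimates of the kind described are commonly obtained by box counting; the sketch below illustrates that generic method (not necessarily the estimator used by the authors) on synthetic binary patterns.

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary pattern by box counting:
    the slope of log(occupied boxes) against log(1 / box size)."""
    counts = []
    for s in sizes:
        n = mask.shape[0] // s
        # Partition the image into s-by-s boxes and count the occupied ones.
        blocks = mask[:n * s, :n * s].reshape(n, s, n, s)
        counts.append((blocks.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# A filled square should come out near dimension 2, a straight line near 1.
filled = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(box_count_dimension(filled), box_count_dimension(line))  # close to 2 and 1
```

The slope of this log-log fit is exactly the fractal dimension-resolution relation the abstract discusses: how measured complexity changes as the effective pixel size changes.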

  18. Fractal analysis for reduced reference image quality assessment.

    PubMed

    Xu, Yong; Liu, Delei; Quan, Yuhui; Le Callet, Patrick

    2015-07-01

    In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-IQA approach is proposed, which measures the difference in spatial arrangement between the reference image and the distorted image in terms of spatial regularity measured by fractal dimension. An image is first expressed in the Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor subband and concatenated as a feature vector. Finally, the extracted features are pooled as the quality score of the distorted image using the l1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.

  20. Anima: modular workflow system for comprehensive image data analysis.

    PubMed

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima.

  1. Analysis of filtering techniques and image quality in pixel duplicated images

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2009-08-01

    When images undergo filtering operations, valuable information can be lost along with the intended noise or frequencies, due to the averaging of neighboring pixels. When the image is enlarged by duplicating pixels, such filtering effects can be reduced and more information retained, which can be critical when analyzing image content automatically. Analysis of retinal images can reveal many diseases at an early stage, as long as minor changes that depart from a normal retinal scan can be identified and enhanced. In this paper, typical filtering techniques are applied to an early-stage diabetic retinopathy image which has undergone digital pixel duplication. The same techniques are applied to the original images for comparison. The effects of filtering are then demonstrated for both pixel-duplicated and original images to show the information retention capability of pixel duplication. Image quality is computed based on published metrics. Our analysis shows that pixel duplication is effective in retaining information under smoothing operations such as mean filtering in the spatial domain, as well as lowpass and highpass filtering in the frequency domain, depending on the filter window size. Blocking effects due to image compression and pixel duplication become apparent in frequency analysis.
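The retention effect can be sketched in numpy with a synthetic one-pixel feature (a stand-in for a small retinal lesion, not the paper's data): duplicating pixels before a 3x3 mean filter preserves more of the feature's amplitude than filtering the original directly.

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter with edge replication."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def duplicate_pixels(img, factor=2):
    """Enlarge the image by duplicating each pixel factor x factor times."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

# A single bright pixel (a small lesion-like dot) on a dark field.
img = np.zeros((9, 9))
img[4, 4] = 9.0

direct = mean_filter3(img).max()                 # peak after filtering original
dup = mean_filter3(duplicate_pixels(img)).max()  # peak after duplicating first
print(direct, dup)  # 1.0 4.0
```

After duplication the feature spans a 2x2 block, so the 3x3 averaging window can no longer dilute it to a ninth of its amplitude, which is exactly the information-retention behavior the abstract reports.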

  2. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  3. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed method, the image is first transformed to the ICA domain and the noise components are then removed by soft thresholding (shrinkage). The proposed filter, used as a preprocessing step before reconstruction, has been successfully applied to penumbral imaging. Both simulation and experimental results show that the reconstructed image is dramatically improved in comparison to reconstruction without the noise-removing filter.
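The shrinkage step is the standard soft-thresholding operator applied to transform-domain coefficients. A minimal sketch of that operator alone (the ICA transform itself, and the threshold value, are omitted as they depend on the data):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrinkage used for transform-domain denoising: pull every coefficient
    toward zero by t, zeroing out the small (noise-dominated) ones."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.0, 0.4, 2.5])
print(soft_threshold(c, 1.0))  # small coefficients vanish, large ones shrink by 1
```

In the filtering scheme described, this operator would be applied to the ICA-domain representation of the image before transforming back and reconstructing.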

  4. Four dimensional reconstruction and analysis of plume images

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III

    Results of a time-history based three-dimensional reconstruction of cross-sectional images corresponding to a specific planar location of the jet structure are reported. The experimental set-up is described, and three-dimensional displays of time-history based reconstruction of the jet structure are presented. Future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images are expected to provide a tool for correlating engine diagnostic features with visible flow structures.

  5. An Analysis of the Magneto-Optic Imaging System

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  6. Terahertz grayscale imaging using spatial frequency domain analysis

    NASA Astrophysics Data System (ADS)

    Lv, Zhihui; Sun, Lin; Zhang, Dongwen; Yuan, Jianmin

    2011-11-01

    We report a gray-scale imaging technique using broadband terahertz pulses. By exploiting the spatial distribution of the different frequency content, image information can be acquired through analysis in the terahertz frequency domain. Unlike conventional approaches that use CCDs (charge-coupled devices) or spot scanning, our scheme requires only a single-pixel detector and a single measurement, and it is expected to enable high-SNR terahertz imaging at high speed.

  7. Terahertz grayscale imaging using spatial frequency domain analysis

    NASA Astrophysics Data System (ADS)

    Lv, Zhihui; Sun, Lin; Zhang, Dongwen; Yuan, Jianmin

    2012-03-01

    We report a gray-scale imaging technique using broadband terahertz pulses. By exploiting the spatial distribution of the different frequency content, image information can be acquired through analysis in the terahertz frequency domain. Unlike conventional approaches that use CCDs (charge-coupled devices) or spot scanning, our scheme requires only a single-pixel detector and a single measurement, and it is expected to enable high-SNR terahertz imaging at high speed.

  8. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and face. The full source code of the developed application is provided as an attachment. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Computer Vision-Based Image Analysis of Bacteria.

    PubMed

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

    Microscopy is an essential tool for studying bacteria, but it is today mostly used in a qualitative or, at best, semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which are typically very difficult for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve an objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation, and analysis that can be relatively easily implemented for use in bacterial research.
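The basic pipeline of segmentation followed by per-object quantification can be sketched with numpy and scipy (a generic illustration on a synthetic image, not the authors' workflow): threshold the image, label connected bright regions, and measure each region.

```python
import numpy as np
from scipy import ndimage

# Synthetic micrograph: two rod-shaped "bacteria" on a noisy dark background.
img = np.zeros((40, 40))
img[5:9, 5:12] = 1.0     # first cell
img[20:24, 25:32] = 1.0  # second cell
img += 0.05 * np.random.default_rng(0).random((40, 40))  # camera noise

binary = img > 0.5                       # segmentation by a global threshold
labels, count = ndimage.label(binary)    # connected-component labeling
# Quantify a per-cell feature: area (pixel count) of each labeled object.
sizes = ndimage.sum(binary, labels, range(1, count + 1))
print(count, sizes)  # 2 cells, 28 pixels each
```

Real bacterial images would typically need adaptive thresholding and shape filtering on top of this, but the threshold-label-measure skeleton is the same.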

  10. "Multimodal Contrast" from the Multivariate Analysis of Hyperspectral CARS Images

    NASA Astrophysics Data System (ADS)

    Tabarangao, Joel T.

    The typical contrast mechanism employed in multimodal CARS microscopy involves the use of other nonlinear imaging modalities such as two-photon excitation fluorescence (TPEF) microscopy and second harmonic generation (SHG) microscopy to produce a molecule-specific pseudocolor image. In this work, I explore the use of unsupervised multivariate statistical analysis tools such as Principal Component Analysis (PCA) and Vertex Component Analysis (VCA) to provide better contrast using the hyperspectral CARS data alone. Using simulated CARS images, I investigate the effects of the quadratic dependence of CARS signal on concentration on the pixel clustering and classification and I find that a normalization step is necessary to improve pixel color assignment. Using an atherosclerotic rabbit aorta test image, I show that the VCA algorithm provides pseudocolor contrast that is comparable to multimodal imaging, thus showing that much of the information gleaned from a multimodal approach can be sufficiently extracted from the CARS hyperspectral stack itself.

  11. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

    For counting of both colonies and plaques, there are many applications including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on systems of this kind. Our investigation found that some existing systems, being products of a new technology, still have problems, a principal one being image acquisition. In order to acquire colony images of good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which simplifies image processing. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.

  12. System Matrix Analysis for Computed Tomography Imaging.

    PubMed

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data.

  13. Citrus fruit recognition using color image analysis

    NASA Astrophysics Data System (ADS)

    Xu, Huirong; Ying, Yibin

    2004-10-01

    An algorithm for the automatic recognition of citrus fruit on the tree was developed. Citrus fruits differ in color from the leaf and branch portions. Fifty-three color images of natural citrus-grove scenes were digitized and analyzed for red, green, and blue (RGB) color content. The color characteristics of target surfaces (fruits, leaves, or branches) were extracted using a region of interest (ROI) tool. Several types of contrast color indices were designed and tested. In this study, the fruit image was enhanced using the (R-B) contrast color index, because results show that the fruits have the highest color difference among the objects in the image. A dynamic threshold function was derived from this color model and used to distinguish citrus fruit from the background. The results show that the algorithm worked well under frontlighting or backlighting conditions. However, there are misclassifications when the fruit or the background is under brighter sunlight.
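The (R-B) contrast index reduces to a per-pixel channel difference followed by a threshold. A minimal numpy sketch (the scene, colors, and fixed threshold below are synthetic; the paper derives the threshold dynamically):

```python
import numpy as np

def r_minus_b_mask(image, threshold):
    """Segment fruit pixels with the (R-B) contrast color index."""
    index = image[..., 0].astype(float) - image[..., 2].astype(float)
    return index > threshold

# Synthetic scene: green "leaf" background with an orange "fruit" patch.
scene = np.zeros((10, 10, 3), dtype=np.uint8)
scene[...] = (40, 120, 30)        # leaves: R - B = 10
scene[3:7, 3:7] = (230, 140, 40)  # fruit:  R - B = 190
mask = r_minus_b_mask(scene, threshold=100)
print(mask.sum())  # 16 fruit pixels detected
```

The index works because ripe citrus has a strong red component and weak blue, while foliage has neither, so their (R-B) values separate cleanly; the misclassifications the abstract mentions arise when bright sunlight pushes background pixels across the threshold.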

  14. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482

  15. Spectral identity mapping for enhanced chemical image analysis

    NASA Astrophysics Data System (ADS)

    Turner, John F., II

    2005-03-01

    Advances in spectral imaging instrumentation during the last two decades have led to higher image fidelity, tighter spatial resolution, narrower spectral resolution, and improved signal-to-noise ratios. An important sub-classification of spectral imaging is chemical imaging, in which the sought-after information from the sample is its chemical composition. Consequently, chemical imaging can be thought of as a two-step process: spectral image acquisition and the subsequent processing of the spectral image data to generate chemically relevant image contrast. While chemical imaging systems that provide turnkey data acquisition are increasingly widespread, better strategies to analyze the vast datasets they produce are needed. The generation of chemically relevant image contrast from spectral image data requires multivariate processing algorithms that can categorize spectra according to shape. Conventional chemometric techniques like inverse least squares, classical least squares, multiple linear regression, principal component regression, and multivariate curve resolution are effective for predicting the chemical composition of samples having known constituents, but are less effective when a priori information about the sample is unavailable. To address these problems, we have developed a fully automated non-parametric technique called spectral identity mapping (SIMS) that reduces the dependence of spectral image analysis on training datasets. The qualitative SIMS method provides enhanced spectral shape specificity and improved chemical image contrast. We present SIMS results for infrared spectral image data acquired from polymer-coated paper substrates used in the manufacture of pressure sensitive adhesive tapes. In addition, we compare the SIMS results to results from spectral angle mapping (SAM) and cosine correlation analysis (CCA), two closely related techniques.
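Spectral angle mapping, one of the comparison techniques named above, classifies spectra by shape: it measures the angle between two spectra viewed as vectors, which is insensitive to overall intensity. A minimal sketch with synthetic spectra:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra; a small angle means similar shape,
    regardless of overall intensity scaling."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([1.0, 2.0, 4.0, 2.0, 1.0])       # reference band shape
same_shape = 3.0 * ref                           # brighter, identical shape
other = np.array([4.0, 2.0, 1.0, 2.0, 4.0])     # different band shape
print(spectral_angle(ref, same_shape), spectral_angle(ref, other))
```

Cosine correlation analysis is the closely related variant that reports the cosine itself rather than the angle; both categorize pixels by comparing each measured spectrum against reference shapes in this way.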

  16. Texture analysis: a review of neurologic MR imaging applications.

    PubMed

    Kassner, A; Thornhill, R E

    2010-05-01

Texture analysis describes a variety of image-analysis techniques that quantify the variation in surface intensity or patterns, including some that are imperceptible to the human visual system. Texture analysis may be particularly well suited for lesion segmentation and characterization and for the longitudinal monitoring of disease or recovery. We begin this review by outlining the general procedure for performing texture analysis, identifying some potential pitfalls and strategies for avoiding them. We then provide an overview of some intriguing neuro-MR imaging applications of texture analysis, particularly in the characterization of brain tumors, prediction of seizures in epilepsy, and a host of applications to multiple sclerosis (MS).
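A common starting point for the texture measures this review surveys is the gray-level co-occurrence matrix (GLCM) with Haralick statistics such as contrast. The minimal NumPy sketch below is illustrative only; production code would use an optimized library implementation.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized so the entries form a joint probability distribution."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """Haralick contrast: co-occurrences weighted by squared gray-level difference."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())
```

A perfectly uniform region has zero contrast, while a fine checkerboard maximizes it; such statistics capture the "imperceptible" intensity variation the review describes.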

  17. Analysis of PETT images in psychiatric disorders

    SciTech Connect

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM-III diagnoses of schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.
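The feature-extraction step described here (complex Fourier coefficients as a per-image feature vector, followed by principal component analysis) can be sketched as follows. This is a generic illustration of the idea, not the authors' exact pipeline; the block size `k` is an assumed parameter.

```python
import numpy as np

def fourier_features(image, k=4):
    """Low-frequency 2-D Fourier coefficients of an image, flattened into a
    real feature vector (real and imaginary parts of a k x k block)."""
    coeffs = np.fft.fft2(image)[:k, :k].ravel()
    return np.concatenate([coeffs.real, coeffs.imag])

def pca(features, n_components=2):
    """Project stacked feature vectors onto their leading principal components."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

The projected coordinates can then be fed to clustering or discriminant analysis, as in the study.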

  18. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.

  19. Bridging the Semantic Gap Between Diagnostic Histopathology and Image Analysis.

    PubMed

    Traore, Lamine; Kergosien, Yannick; Racoceanu, Daniel

    2017-01-01

With the wider acceptance of Whole Slide Images (WSI) in the histopathology domain, automatic image analysis algorithms represent a very promising solution to support pathologists' laborious tasks during the diagnosis process, to create a quantification-based second opinion and to enhance inter-observer agreement. In this context, reference vocabularies and formalization of the associated knowledge are especially needed to annotate histopathology images with labels complying with semantic standards. In this work, we elaborate a sustainable triptych able to bridge the gap between pathologists and image analysis scientists. The proposed paradigm is structured along three components: i) extracting a relevant semantic repository from the College of American Pathologists (CAP) organ-specific Cancer Checklists and associated Protocols (CC&P); ii) identifying formalized imaging knowledge derived from effective histopathology imaging methods highlighted by recent Digital Pathology (DP) contests and iii) proposing a formal representation of the imaging concepts and functionalities provided by major biomedical imaging software (MATLAB, ITK, ImageJ). Since the first step i) has been the object of a recent publication by our team, this study focuses on steps ii) and iii). Our hypothesis is that the management of available semantic resources concerning the histopathology imaging tasks associated with effective methods highlighted by the recent DP challenges will facilitate the integration of WSI in clinical routine and support a new generation of DP protocols.

  20. Disability in Physical Education Textbooks: An Analysis of Image Content

    ERIC Educational Resources Information Center

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  1. Four challenges in medical image analysis from an industrial perspective.

    PubMed

    Weese, Jürgen; Lorenz, Cristian

    2016-10-01

Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient workflow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Ringed impact craters on Venus: An analysis from Magellan images

    NASA Technical Reports Server (NTRS)

    Alexopoulos, Jim S.; Mckinnon, William B.

    1992-01-01

    We have analyzed cycle 1 Magellan images covering approximately 90 percent of the venusian surface and have identified 55 unequivocal peak-ring craters and multiringed impact basins. This comprehensive study (52 peak-ring craters and at least 3 multiringed impact basins) complements our earlier independent analysis of Arecibo and Venera images and initial Magellan data and that of the Magellan team.

  3. Higher Education Institution Image: A Correspondence Analysis Approach.

    ERIC Educational Resources Information Center

    Ivy, Jonathan

    2001-01-01

    Investigated how marketing is used to convey higher education institution type image in the United Kingdom and South Africa. Using correspondence analysis, revealed the unique positionings created by old and new universities and technikons in these countries. Also identified which marketing tools they use in conveying their image. (EV)

  4. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  8. VIDA: an environment for multidimensional image display and analysis

    NASA Astrophysics Data System (ADS)

    Hoffman, Eric A.; Gnanaprakasam, Daniel; Gupta, Krishanu B.; Hoford, John D.; Kugelmass, Steven D.; Kulawiec, Richard S.

    1992-06-01

Since the first dynamic volumetric studies were done in the early 1980s on the dynamic spatial reconstructor (DSR), there has been a surge of interest in volumetric and dynamic imaging using a number of tomographic techniques. Knowledge gained in handling DSR image data has readily transferred to the current use of a number of other volumetric and dynamic imaging modalities including cine and spiral CT, MR, and PET. This in turn has led to our development of a new image display and quantitation package which we have named VIDA™ (volumetric image display and analysis). VIDA is written in C, runs under the UNIX™ operating system, and uses the XView toolkit to conform to the Open Look™ graphical user interface specification. A shared memory structure has been designed which allows for the manipulation of multiple volumes simultaneously. VIDA utilizes a windowing environment and allows execution of multiple processes simultaneously. Available programs include: oblique sectioning, volume rendering, region of interest analysis, interactive image segmentation/editing, algebraic image manipulation, conventional cardiac mechanics analysis, homogeneous strain analysis, tissue blood flow evaluation, etc. VIDA is built modularly, allowing new programs to be developed and integrated easily. An emphasis has been placed upon image quantitation for the purpose of physiological evaluation.

  9. On the applicability of numerical image mapping for PIV image analysis near curved interfaces

    NASA Astrophysics Data System (ADS)

    Masullo, Alessandro; Theunissen, Raf

    2017-07-01

    This paper scrutinises the general suitability of image mapping for particle image velocimetry (PIV) applications. Image mapping can improve PIV measurement accuracy by eliminating overlap between the PIV interrogation windows and an interface, as illustrated by some examples in the literature. Image mapping transforms the PIV images using a curvilinear interface-fitted mesh prior to performing the PIV cross correlation. However, degrading effects due to particle image deformation and the Jacobian transformation inherent in the mapping along curvilinear grid lines have never been deeply investigated. Here, the implementation of image mapping from mesh generation to image resampling is presented in detail, and related error sources are analysed. Systematic comparison with standard PIV approaches shows that image mapping is effective only in a very limited set of flow conditions and geometries, and depends strongly on a priori knowledge of the boundary shape and streamlines. In particular, with strongly curved geometries or streamlines that are not parallel to the interface, the image-mapping approach is easily outperformed by more traditional image analysis methodologies invoking suitable spatial relocation of the obtained displacement vector.
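The image-mapping step analysed in this record ultimately reduces to resampling the image at curvilinear grid coordinates. A minimal bilinear resampling sketch is given below for illustration; real PIV codes typically use higher-order interpolation, precisely because of the particle-image deformation errors the paper investigates.

```python
import numpy as np

def remap_bilinear(image, map_y, map_x):
    """Resample `image` at fractional coordinates (map_y, map_x), e.g. along
    an interface-fitted curvilinear grid, using bilinear interpolation."""
    y0 = np.clip(np.floor(map_y).astype(int), 0, image.shape[0] - 2)
    x0 = np.clip(np.floor(map_x).astype(int), 0, image.shape[1] - 2)
    fy, fx = map_y - y0, map_x - x0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bot = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```

The cross-correlation windows would then be taken from the remapped image, so no window overlaps the physical interface.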

  10. Towards Building Computerized Image Analysis Framework for Nucleus Discrimination in Microscopy Images of Diffuse Glioma

    PubMed Central

    Kong, Jun; Cooper, Lee; Kurc, Tahsin; Brat, Daniel; Saltz, Joel

    2012-01-01

As an effort to build an automated and objective system for pathologic image analysis, we present, in this paper, a computerized image processing method for identifying nuclei, a basic biological unit of diagnostic utility, in microscopy images of glioma tissue samples. The complete analysis includes multiple processing steps, involving mode detection with color and spatial information for pixel clustering, background normalization leveraging morphological operations, boundary refinement with deformable models, and clumped nuclei separation using watershed. In aggregate, our validation dataset includes 220 nuclei from 11 distinct tissue regions selected at random by an experienced neuropathologist. Computerized nuclei detection results are in good concordance with human markups by both visual appraisement and quantitative measures. We compare the performance of the proposed analysis algorithm with that of CellProfiler, a classical software package for cell image processing, and demonstrate the superiority of our method. PMID:22255853

  11. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  12. Image analysis for denoising full-field frequency-domain fluorescence lifetime images.

    PubMed

    Spring, B Q; Clegg, R M

    2009-08-01

    Video-rate fluorescence lifetime-resolved imaging microscopy (FLIM) is a quantitative imaging technique for measuring dynamic processes in biological specimens. FLIM offers valuable information in addition to simple fluorescence intensity imaging; for instance, the fluorescence lifetime is sensitive to the microenvironment of the fluorophore allowing reliable differentiation between concentration differences and dynamic quenching. Homodyne FLIM is a full-field frequency-domain technique for imaging fluorescence lifetimes at every pixel of a fluorescence image simultaneously. If a single modulation frequency is used, video-rate image acquisition is possible. Homodyne FLIM uses a gain-modulated image intensified charge-coupled device (ICCD) detector, which unfortunately is a major contribution to the noise of the measurement. Here we introduce image analysis for denoising homodyne FLIM data. The denoising routine is fast, improves the extraction of the fluorescence lifetime value(s) and increases the sensitivity and fluorescence lifetime resolving power of the FLIM instrument. The spatial resolution (especially the high spatial frequencies not related to noise) of the FLIM image is preserved, because the denoising routine does not blur or smooth the image. By eliminating the random noise known to be specific to photon noise and from the intensifier amplification, the fidelity of the spatial resolution is improved. The polar plot projection, a rapid FLIM analysis method, is used to demonstrate the effectiveness of the denoising routine with exemplary data from both physical and complex biological samples. We also suggest broader impacts of the image analysis for other fluorescence microscopy techniques (e.g. super-resolution imaging).
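For context, the standard single-frequency homodyne relations between the measured phase shift, the demodulation ratio, and the fluorescence lifetime, together with the polar (phasor) plot projection mentioned in this abstract, can be written out as a small sketch (generic frequency-domain FLIM relations, not code from the paper):

```python
import numpy as np

def phase_lifetime(phi, f_mod):
    """Lifetime from the measured phase shift: tau = tan(phi) / (2*pi*f)."""
    return np.tan(phi) / (2 * np.pi * f_mod)

def modulation_lifetime(m, f_mod):
    """Lifetime from the demodulation ratio: tau = sqrt(1/m**2 - 1) / (2*pi*f)."""
    return np.sqrt(1.0 / m**2 - 1.0) / (2 * np.pi * f_mod)

def polar_coordinates(phi, m):
    """Polar (phasor) plot projection; single-exponential decays fall on the
    semicircle centred at (0.5, 0) with radius 0.5."""
    return m * np.cos(phi), m * np.sin(phi)
```

For a single-exponential decay the two lifetime estimates agree, and the phasor point lies exactly on the universal semicircle; deviations between them are one way denoising quality can be judged.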

  13. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem.

  14. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1984-01-01

    Methods were developed for estimating point spread functions from image data. Roads and bridges in dark backgrounds are being examined as well as other smoothing methods for reducing noise in the estimated point spread function. Tomographic techniques were used to estimate two dimensional point spread functions. Reformatting software changes were implemented to handle formats for LANDSAT-5 data.

  15. Electron Microscopy and Image Analysis for Selected Materials

    NASA Technical Reports Server (NTRS)

    Williams, George

    1999-01-01

This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents on the ocean floor, hot springs and many others. We were successful in our efforts to obtain high-quality, high-resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  17. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  18. Image analysis of dye stained patterns in soils

    NASA Astrophysics Data System (ADS)

    Bogner, Christina; Trancón y Widemann, Baltasar; Lange, Holger

    2013-04-01

Quality of surface water and groundwater is directly affected by flow processes in the unsaturated zone. In general, it is difficult to measure or model water flow. Indeed, parametrization of hydrological models is problematic and often no unique solution exists. To visualise flow patterns in soils directly, dye tracer studies can be done. These experiments provide images of stained soil profiles and their evaluation demands knowledge in hydrology as well as in image analysis and statistics. First, these photographs are converted to binary images classifying the pixels into dye-stained and non-stained ones. Then, some feature extraction is necessary to discern relevant hydrological information. In our study we propose to use several index functions to extract different (ideally complementary) features. We associate each image row with a feature vector (i.e. a certain number of image function values) and use these features to cluster the image rows to identify similar image areas. Because images of stained profiles might have different reasonable clusterings, we calculate multiple consensus clusterings. An expert can explore these different solutions and base his/her interpretation of predominant flow mechanisms on quantitative (objective) criteria. The complete workflow from reading in binary images to final clusterings has been implemented in the free R system, a language and environment for statistical computing. The calculation of image indices is part of our own package Indigo; manipulation of binary images, clustering, and visualization of results are done using built-in facilities in R, additional R packages, or the LaTeX system.
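The row-wise feature extraction described here can be illustrated with a small NumPy sketch. The study itself works in R (with its own Indigo package); the two index functions below, dye coverage and stained-run count, are hypothetical examples of such per-row indices, not the authors' actual choices.

```python
import numpy as np

def row_features(binary):
    """Per-row feature vector for a binary (stained = 1) profile image:
    column 0 is the fraction of stained pixels, column 1 the number of
    contiguous stained segments in the row."""
    coverage = binary.mean(axis=1)
    # A stained run starts wherever a 0 -> 1 transition occurs; count per row.
    starts = np.diff(binary, axis=1, prepend=0) == 1
    runs = starts.sum(axis=1)
    return np.column_stack([coverage, runs])
```

Stacking these per-row vectors gives exactly the kind of feature matrix that can then be clustered to group similar image areas.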

  19. Simulation of radiographic images for quality and dose analysis

    NASA Astrophysics Data System (ADS)

    Winslow, Mark P.

A software package, Virtual Photographic Radiographic Imaging Simulator (ViPRIS), has been developed for optimizing x-ray radiographic imaging. A tomographic phantom, VIP-Man, constructed from Visible Human anatomical color images, is used to simulate the scattered portion of an x-ray image and to compute organ doses using the EGSnrc Monte Carlo code. The primary portion of an x-ray image is simulated using the projection ray-tracing method through the Visible Human CT data set. To produce a realistic image, the software simulates quantum noise, blurring effects, lesions, detector absorption efficiency, and other imaging artifacts. The primary and scattered portions of an x-ray chest image are combined to form a final image for observer studies using computerized simulated observers. Absorbed doses in organs and tissues of the segmented VIP-Man phantom were also obtained from the Monte Carlo simulations to derive effective dose, which is a radiation risk indicator. Approximately 2000 simulated images and 200,000 vectorized image data files were analyzed using ROC/AUC analysis. Results demonstrated the usefulness of this approach and the software for studying x-ray image quality and radiation dose.
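The ROC/AUC figure of merit used in the observer studies above can be computed directly from the Mann-Whitney interpretation of the AUC: the probability that a randomly chosen positive (lesion-present) case scores higher than a randomly chosen negative one, with ties counting half. A minimal sketch:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of 1.0 means the simulated observer separates lesion-present from lesion-absent images perfectly; 0.5 means chance performance.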

  20. A performance analysis system for MEMS using automated imaging methods

    SciTech Connect

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  1. Image Analysis in Plant Sciences: Publish Then Perish.

    PubMed

    Lobet, Guillaume

    2017-07-01

    Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Spatially Weighted Principal Component Analysis for Imaging Classification

    PubMed Central

    Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu

    2014-01-01

The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629

  3. Memory-Augmented Cellular Automata for Image Analysis.

    DTIC Science & Technology

    1978-11-01

    case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)

  4. Analysis of Multipath Pixels in SAR Images

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.

    2016-06-01

As the received radar signal is the sum of all signal contributions overlaid in a single pixel regardless of the travel path, the multipath effect must be tackled seriously: multiple-bounce returns are added to the direct scatter echoes, producing ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To facilitate the signal propagation simulation, many aspects must be specified in advance, such as the sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which determine the strength of the radar signal backscattered to the SAR sensor. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at providing a baseline for further comparison. Besides the pre-imaging simulation, the post-imaging products, i.e. the radar images themselves, are also taken into consideration. Both Cosmo-SkyMed ascending and descending SAR images of the Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity map and the signal distribution map of different bounce levels are simulated and validated against a 3D real model, and statistical indexes such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.

  5. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of these data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate the move from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data are recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
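    The region-growing half of this approach can be illustrated with a toy hierarchical merge in Python (a drastically simplified sketch for intuition, not the RHSEG implementation; it omits the recursive subdivision and the region-class grouping):

```python
import numpy as np

def region_grow(img, n_regions):
    """Toy hierarchical region merging: every pixel starts as its own
    region; the pair of spatially adjacent regions with the most
    similar mean intensity is merged until n_regions remain."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)
    # region statistics: label -> (intensity sum, pixel count)
    stats = {int(l): (float(img.flat[l]), 1) for l in labels.flat}

    def mean(l):
        s, c = stats[l]
        return s / c

    while len(stats) > n_regions:
        best = None  # (dissimilarity, label_a, label_b)
        for y in range(h):
            for x in range(w):
                a = labels[y, x]
                for dy, dx in ((0, 1), (1, 0)):  # right and down neighbours
                    yy, xx = y + dy, x + dx
                    if yy < h and xx < w:
                        b = labels[yy, xx]
                        if a != b:
                            d = abs(mean(a) - mean(b))
                            if best is None or d < best[0]:
                                best = (d, a, b)
        _, a, b = best
        sb, cb = stats.pop(b)       # absorb region b into region a
        sa, ca = stats[a]
        stats[a] = (sa + sb, ca + cb)
        labels[labels == b] = a
    return labels
```

    Scanning every adjacent pair at every merge is exactly the combinatorial cost that RHSEG's recursive subdivision is designed to mitigate; this sketch is only practical for tiny images.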

  7. Alternative Theories of Inference in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

    D-A153 649: Alternative Theories of Inference in Expert Systems for Image Analysis. Decision Science Consortium Inc., Falls Church, VA; Marvin S. Cohen et al. [Only the title, accession number, performing organization and author are recoverable from the scanned OCR text of this record.]

  8. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  9. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report the Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.
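    The blind-source-separation step can be sketched with a minimal NumPy FastICA (an illustrative stand-in for the authors' processing; the sine "water-vapor" signal, the square "sample" signal and the mixing matrix are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * np.pi * t)              # stands in for the water-vapor line
s2 = np.sign(np.sin(3 * np.pi * t))     # stands in for the sample response
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown mixing, as in a measured pixel
X = S @ A.T                             # observed mixed signals

# centre and whiten the observations
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = Xc @ (E @ np.diag(d ** -0.5) @ E.T)

# FastICA fixed-point iteration with symmetric decorrelation, g = tanh
W = rng.standard_normal((2, 2))
for _ in range(200):
    G = np.tanh(Z @ W.T)
    W_new = (G.T @ Z) / len(Z) - np.diag((1 - G ** 2).mean(axis=0)) @ W
    U, _, Vt = np.linalg.svd(W_new)
    W = U @ Vt                          # enforce (W W^T)^(-1/2) W
est = Z @ W.T                           # recovered sources
```

    Up to the usual ICA ambiguities of sign, scale and ordering, the columns of `est` recover the two independent sources, which is what allows a known water-vapor component to be identified and removed per pixel.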

  10. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  11. Pathology imaging informatics for quantitative analysis of whole-slide images

    PubMed Central

    Kothari, Sonal; Phan, John H; Stokes, Todd H; Wang, May D

    2013-01-01

    Objectives With the objective of bringing clinical decision support systems to reality, this article reviews histopathological whole-slide imaging informatics methods, associated challenges, and future research opportunities. Target audience This review targets pathologists and informaticians who have a limited understanding of the key aspects of whole-slide image (WSI) analysis and/or a limited knowledge of state-of-the-art technologies and analysis methods. Scope First, we discuss the importance of imaging informatics in pathology and highlight the challenges posed by histopathological WSI. Next, we provide a thorough review of current methods for: quality control of histopathological images; feature extraction that captures image properties at the pixel, object, and semantic levels; predictive modeling that utilizes image features for diagnostic or prognostic applications; and data and information visualization that explores WSI for de novo discovery. In addition, we highlight future research directions and discuss the impact of large public repositories of histopathological data, such as the Cancer Genome Atlas, on the field of pathology informatics. Following the review, we present a case study to illustrate a clinical decision support system that begins with quality control and ends with predictive modeling for several cancer endpoints. Currently, state-of-the-art software tools only provide limited image processing capabilities instead of complete data analysis for clinical decision-making. We aim to inspire researchers to conduct more research in pathology imaging informatics so that clinical decision support can become a reality. PMID:23959844

  12. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with the unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images from groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method's discrimination rates were higher than those determined by human observers. The method allows sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
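    The discrimination stage can be sketched generically: a two-class Fisher linear discriminant projects feature vectors onto the direction that best separates the class means relative to the within-class scatter. This is a textbook sketch on synthetic Gaussian features, not the authors' pipeline:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant.
    Returns the projection vector w and a midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

rng = np.random.default_rng(1)
X0 = rng.normal([0, 0], 0.5, (200, 2))   # hypothetical "non-diabetic" features
X1 = rng.normal([2, 2], 0.5, (200, 2))   # hypothetical "diabetic" features
w, thr = fisher_lda(X0, X1)
pred1 = (X1 @ w) > thr                   # class-1 side of the threshold
```

    In the study itself the feature vectors would be the fine-structure vessel measurements from the regression analysis rather than synthetic Gaussians.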

  13. A collaborative biomedical image mining framework: application on the image analysis of microscopic kidney biopsies.

    PubMed

    Goudas, T; Doukas, C; Chatziioannou, A; Maglogiannis, I

    2013-01-01

    The analysis and characterization of biomedical image data is a complex procedure involving several processing phases, like data acquisition, preprocessing, segmentation, feature extraction and classification. The proper combination and parameterization of the utilized methods are heavily relying on the given image data set and experiment type. They may thus necessitate advanced image processing and classification knowledge and skills from the side of the biomedical expert. In this work, an application, exploiting web services and applying ontological modeling, is presented, to enable the intelligent creation of image mining workflows. The described tool can be directly integrated to the RapidMiner, Taverna or similar workflow management platforms. A case study dealing with the creation of a sample workflow for the analysis of kidney biopsy microscopy images is presented to demonstrate the functionality of the proposed framework.

  14. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quickbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers the shape, texture and content information associated with image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters were analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with optimal scale, shape and compactness parameters defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied to the segmented images, and then accuracy assessment was performed. Results show that segmentation parameters have a direct effect on classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects; hence it can be set to a constant value in image classification.

  15. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations into feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.
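    Moment-based shape features of this kind can be computed directly; the sketch below derives the centroid and second-order central moments of an image region (a generic illustration of moment features, not the paper's specific moment method):

```python
import numpy as np

def central_moments(img, max_order=2):
    """Centroid and central image moments mu_pq of a grayscale image,
    usable as abstract shape features."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                              # total mass
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            if p + q >= 2:                       # orders 0 and 1 are trivial
                mu[(p, q)] = ((x - cx) ** p * (y - cy) ** q * img).sum()
    return (cx, cy), mu
```

    Collecting such moments over many orders is precisely what produces the "relatively large input space" that the feature subset selection methods then reduce.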

  16. Personalized structural image analysis in patients with temporal lobe epilepsy.

    PubMed

    Rummel, Christian; Slavova, Nedelina; Seiler, Andrea; Abela, Eugenio; Hauf, Martinus; Burren, Yuliya; Weisstanner, Christian; Vulliemoz, Serge; Seeck, Margitta; Schindler, Kaspar; Wiest, Roland

    2017-09-07

    Volumetric and morphometric studies have demonstrated structural abnormalities related to chronic epilepsies on a cohort- and population-based level. On a single-patient level, specific patterns of atrophy or cortical reorganization may be widespread and heterogeneous but represent potential targets for further personalized image analysis and surgical therapy. The goal of this study was to compare morphometric data analysis in 37 patients with temporal lobe epilepsies with expert-based image analysis, pre-informed by seizure semiology and ictal scalp EEG. Automated image analysis identified abnormalities exceeding expert-determined structural epileptogenic lesions in 86% of datasets. If EEG lateralization and expert MRI readings were congruent, automated analysis detected abnormalities consistent on a lobar and hemispheric level in 82% of datasets. However, in 25% of patients EEG lateralization and expert readings were inconsistent. Automated analysis localized to the site of resection in 60% of datasets in patients who underwent successful epilepsy surgery. Morphometric abnormalities beyond the mesiotemporal structures contributed to subtype characterisation. We conclude that subject-specific morphometric information is in agreement with expert image analysis and scalp EEG in the majority of cases. However, automated image analysis may provide non-invasive additional information in cases with equivocal radiological and neurophysiological findings.

  17. Digital pathology and image analysis in tissue biomarker research.

    PubMed

    Hamilton, Peter W; Bankhead, Peter; Wang, Yinhai; Hutchinson, Ryan; Kieran, Declan; McArt, Darragh G; James, Jacqueline; Salto-Tellez, Manuel

    2014-11-01

    Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole slide scanning, advances in software and computer processing capacity and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed including nuclear morphometry and tissue architecture analysis, but with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline including biobanking, molecular pathology, tissue microarray analysis, molecular profiling of tissue and these important developments are reviewed. Underpinning all of these important developments is the need for high quality tissue samples and the impact of pre-analytical variables on tissue research is discussed. This requirement is combined with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. User image mismatch in anaesthesia alarms: a cognitive systems analysis.

    PubMed

    Raymer, Karen E; Bergström, Johan

    2013-01-01

    In this study, principles of Cognitive Systems Engineering are used to better understand the human-machine interaction manifesting in the use of anaesthesia alarms. The hypothesis is that the design of the machine incorporates built-in assumptions of the user that are discrepant with the anaesthesiologist's self-assessment, creating 'user image mismatch'. Mismatch was interpreted by focusing on the 'user image' as described from the perspectives of both machine and user. The machine-embedded image was interpreted through document analysis. The user-described image was interpreted through user (anaesthesiologist) interviews. Finally, an analysis was conducted in which the machine-embedded and user-described images were contrasted to identify user image mismatch. It is concluded that analysing user image mismatch expands the focus of attention towards macro-elements in the interaction between man and machine. User image mismatch is interpreted to arise from complexity of algorithm design and incongruity between alarm design and tenets of anaesthesia practice. Cognitive system engineering principles are applied to enhance the understanding of the interaction between anaesthesiologist and alarm. The 'user image' is interpreted and contrasted from the perspectives of machine as well as the user. Apparent machine-user mismatch is explored pertaining to specific design features.

  19. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequence is preprocessed for registration before further analysis such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that will reduce the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. Thermal image sequencing will then be automatically registered using the two-stage genetic algorithm proposed. The deviation before and after image registration will be demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.
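    The centroid-localization and translation step can be sketched as follows (a minimal sketch: the rotation stage and the two-stage genetic algorithm are omitted, and the fixed threshold standing in for eye-region localization is an assumption of the demo):

```python
import numpy as np

def register_translation(ref, mov, thresh):
    """Align `mov` to `ref` by matching the centroids of their
    thresholded regions, then shifting `mov` by the integer offset."""
    def centroid(im):
        ys, xs = np.nonzero(im > thresh)
        return ys.mean(), xs.mean()

    ry, rx = centroid(ref)
    my, mx = centroid(mov)
    dy, dx = int(round(ry - my)), int(round(rx - mx))
    # circular shift; adequate when the region stays away from the border
    return np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
```

    In the study this translation estimate provides the fixed reference image; the residual rotation is then refined by the genetic search.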

  20. Analysis of radar images by means of digital terrain models

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Kobrick, M.

    1984-01-01

    It is pointed out that the importance of digital terrain models in the processing, analysis, and interpretation of remote sensing data is increasing. In investigations related to the study of radar images, digital terrain models can have a particular significance, because radar reflection is a function of the terrain characteristics. A procedure for the analysis and interpretation of radar images is discussed. The procedure is based on a utilization of computer simulation which makes it possible to produce simulated radar images on the basis of a digital terrain model. The simulated radar images are used for the geometric and radiometric rectification of real radar images. A description of the employed procedures is provided, and the obtained results are discussed, taking into account a test area in Northern California.

  1. Adaptive feature enhancement for mammographic images with wavelet multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Chen, Chang W.; Parker, Kevin J.

    1997-10-01

    A novel and computationally efficient approach to adaptive mammographic image feature enhancement using wavelet-based multiresolution analysis is presented. Upon wavelet decomposition of a given mammographic image, we integrate the information from the tree-structured zero crossings of the wavelet coefficients with the information from the low-pass-filtered subimage to enhance the desired image features. A discrete wavelet transform with a pyramidal structure is employed to speed up the computation of wavelet decomposition and reconstruction. The spatio-frequency localization property of the wavelet transform is exploited based on the spatial coherence of the image and principles of the human psycho-visual mechanism. Preliminary results show that the proposed approach is able to adaptively enhance local edge features, suppress noise, and improve the global visualization of mammographic image features. This wavelet-based multiresolution analysis is therefore promising for computerized mass screening of mammograms.
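    The basic enhancement idea (decompose, amplify the detail subbands, reconstruct) can be sketched with a one-level 2-D Haar transform; this is a minimal stand-in for the paper's pyramidal DWT and zero-crossing analysis, not its actual filter bank:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into an approximation band
    (ll) and three detail bands (lh, hl, hh). Sides must be even."""
    a = (img[0::2] + img[1::2]) / 2      # row pairs: average
    d = (img[0::2] - img[1::2]) / 2      # row pairs: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def enhance(img, gain=2.0):
    """Amplify the detail (edge) subbands before reconstruction."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, gain * lh, gain * hl, gain * hh)
```

    With `gain=1.0` the transform reconstructs the input exactly; gains above 1 boost local edge contrast, the adaptive version of which is what the paper tunes per location.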

  2. Analysis of radar images by means of digital terrain models

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Kobrick, M.

    1984-01-01

    It is pointed out that the importance of digital terrain models in the processing, analysis, and interpretation of remote sensing data is increasing. In investigations related to the study of radar images, digital terrain models can have a particular significance, because radar reflection is a function of the terrain characteristics. A procedure for the analysis and interpretation of radar images is discussed. The procedure is based on a utilization of computer simulation which makes it possible to produce simulated radar images on the basis of a digital terrain model. The simulated radar images are used for the geometric and radiometric rectification of real radar images. A description of the employed procedures is provided, and the obtained results are discussed, taking into account a test area in Northern California.

  3. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

    As a noncontact and non-intrusive technique, infrared image analysis is promising for machinery defect diagnosis. However, the weak information content and strong noise in infrared images limit its performance. To address this issue, this paper presents an image segmentation approach to enhance feature extraction in infrared image analysis. A region selection criterion named dispersion degree is also formulated to discriminate fault-representative regions from unrelated background information. Feature extraction and fusion methods are then applied to obtain features from the selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms analysis of the original image using both a Naïve Bayes classifier and a support vector machine.

  4. UBIAS systems for cognitive interpretation and analysis of medical images

    NASA Astrophysics Data System (ADS)

    Ogiela, L.

    2009-06-01

    The main subject of this publication is a selected class of cognitive categorisation systems: understanding-based image analysis systems (UBIAS), which support the analysis of data recorded in the form of images. Cognitive categorisation systems operate by following the particular types of thought, cognitive and reasoning processes that take place in the human mind and that ultimately lead to an in-depth description of the analysis and reasoning process. The most important element of this analysis and reasoning process is that it occurs both in the human cognitive/thinking process and in the system's information/reasoning process, which conducts the in-depth interpretation and analysis of the data.

  5. The Land Analysis System (LAS) for multispectral image processing

    USGS Publications Warehouse

    Wharton, S. W.; Lu, Y. C.; Quirk, Bruce K.; Oleson, Lyndon R.; Newcomer, J. A.; Irani, Frederick M.

    1988-01-01

    The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting, conversion utilities, and high-level device independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

  6. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  7. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.

  8. Person identification using fractal analysis of retina images

    NASA Astrophysics Data System (ADS)

    Ungureanu, Constantin; Corniencu, Felicia

    2004-10-01

    Biometrics is the automated recognition of a person based on physiological or behavioural characteristics. Among the features measured are retina scans, voice, and fingerprints. A retina-based biometric involves the analysis of the blood vessels situated at the back of the eye. In this paper we present a method which uses fractal analysis to characterize retina images. The fractal dimension (FD) of the retinal vessels was measured for a set of 20 images, and a different FD value was obtained for each image. The algorithm provides good accuracy and is cheap and easy to implement.
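    The FD measurement can be sketched with a standard box-counting estimator on a binary vessel mask (a generic implementation of the named technique, not necessarily the authors' exact variant):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary mask: count the
    occupied s-by-s boxes at each scale s, then fit the slope of
    log N(s) versus log(1/s)."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # crop to a multiple of s, then reduce each s x s box to occupied/empty
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(boxed.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope
```

    A smooth curve such as a straight vessel yields a dimension near 1, while a space-filling branching pattern approaches 2; the per-subject variation in this slope is what serves as the identifying feature.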

  9. Multispectral image analysis of bruise age

    NASA Astrophysics Data System (ADS)

    Sprigle, Stephen; Yi, Dingrong; Caspall, Jayme; Linden, Maureen; Kong, Linghua; Duckworth, Mark

    2007-03-01

    The detection and aging of bruises is important within clinical and forensic environments. Traditionally, visual and photographic assessment of bruise color is used to determine age, but this substantially subjective technique has been shown to be inaccurate and unreliable. The purpose of this study was to develop a technique to spectrally age bruises using a reflective multi-spectral imaging system that minimizes the filtering and hardware requirements while achieving acceptable accuracy. This approach will then be incorporated into a handheld, point-of-care technology that is clinically viable and affordable. Sixteen bruises from elderly residents of a long-term care facility were imaged over time. A multi-spectral system collected images through eleven narrow-band (~10 nm FWHM) filters with center wavelengths ranging between 370 and 970 nm, corresponding to specific skin and blood chromophores. Normalized bruise reflectance (NBR), defined as the ratio of the optical reflectance coefficient of bruised skin over that of normal skin, was calculated for all bruises at all wavelengths. The smallest mean NBR, regardless of bruise age, was found at wavelengths between 555 and 577 nm, suggesting that contrast in bruises comes from hemoglobin and lingers for a long duration. A contrast metric based on the NBR at 460 nm and 650 nm was found to be sensitive to age and requires further investigation. Overall, the study identified four key wavelengths that show promise for characterizing bruise age. However, the high variability across the bruises imaged in this study complicates the development of a handheld detection system until additional data are available.
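    The NBR metric itself is a simple per-wavelength ratio; the sketch below computes it for hypothetical band reflectances (the numbers are invented for illustration, not data from the study):

```python
import numpy as np

# NBR as defined above: reflectance of bruised skin over that of
# adjacent normal skin, per wavelength band.
wavelengths = np.array([460, 555, 577, 650])   # nm, the four key bands
bruised = np.array([0.30, 0.18, 0.19, 0.55])   # hypothetical reflectances
normal  = np.array([0.35, 0.30, 0.31, 0.60])   # hypothetical reflectances
nbr = bruised / normal
darkest = wavelengths[np.argmin(nbr)]          # strongest hemoglobin contrast
```

    With these made-up values the minimum NBR falls in the 555 nm band, mirroring the hemoglobin-dominated contrast the study reports near 555 and 577 nm.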

  10. Computerized microscopic image analysis of follicular lymphoma

    NASA Astrophysics Data System (ADS)

    Sertel, Olcay; Kong, Jun; Lozanski, Gerard; Catalyurek, Umit; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Follicular Lymphoma (FL) is a cancer arising from the lymphatic system. Originating from follicle center B cells, FL is mainly comprised of centrocytes (usually middle-to-small sized cells) and centroblasts (relatively large malignant cells). According to the World Health Organization's recommendations, there are three histological grades of FL characterized by the number of centroblasts per high-power field (hpf) of area 0.159 mm². In current practice, these cells are manually counted from ten representative fields of follicles after visual examination of hematoxylin and eosin (H&E) stained slides by pathologists. Several studies clearly demonstrate the poor reproducibility of this grading system, with very low inter-reader agreement. In this study, we are developing a computerized system to assist pathologists with this process. A hybrid approach that combines information from several slides with different stains has been developed. Thus, follicles are first detected from digitized microscopy images with immunohistochemistry (IHC) stains (i.e., CD10 and CD20). The average sensitivity and specificity of the follicle detection tested on 30 images at 2×, 4× and 8× magnifications are 85.5+/-9.8% and 92.5+/-4.0%, respectively. Since the centroblast detection is carried out in the H&E-stained slides, the follicles in the IHC-stained images are mapped to their H&E-stained counterparts. To evaluate the centroblast differentiation capabilities of the system, 11 hpf images were marked by an experienced pathologist who identified 41 centroblast cells and 53 non-centroblast cells. An unsupervised clustering process differentiates the centroblast cells from non-centroblast cells, resulting in 92.68% sensitivity and 90.57% specificity.

  11. Micro imaging analysis for osteoporosis assessment

    NASA Astrophysics Data System (ADS)

    Lima, I.; Farias, M. L. F.; Percegoni, N.; Rosenthal, D.; de Assis, J. T.; Anjos, M. J.; Lopes, R. T.

    2010-03-01

    Characterization of trabecular structures is one of the most important applications of imaging techniques in the biomedical area. The aim of this study was to investigate structural modifications in trabecular and cortical bone using non-destructive techniques such as X-ray microtomography, X-ray microfluorescence by synchrotron radiation and scanning electron microscopy. The results obtained reveal the potential of these techniques for characterizing internal bone structures.

  12. Real-time video-image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Rayfield, M. J.; Yakimovsky, Y.

    1979-01-01

    Digitizer and storage system allow rapid random access to video data by computer. RAPID (random-access picture digitizer) uses two commercially available charge-injection solid-state TV cameras as sensors. It can continuously update its memory with each frame of the video signal, or it can hold a given frame in memory. In either mode, it generates a composite video output signal representing the digitized image in memory.

  13. Analysis of imaging quality under the systematic parameters for thermal imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Jin, Weiqi

    2009-07-01

    Integrating a thermal imaging system with a radar system can increase the range of target identification and strengthen the accuracy and reliability of detection; such integrated systems are state-of-the-art, mainstream tools for searching for invasive targets and guarding homeland security. In operation, however, the thermal imaging system can produce degraded images, with potentially serious consequences for search and detection. In this paper, we study why and how these degraded images arise using wave-optics principles, and we establish a mathematical imaging model that describes the ray-propagation process. We then give special attention to the systematic parameters of the model, analysing in detail each parameter that can affect the imaging process and the role it plays. From this analysis we obtain detailed information about how these parameters shape the observed diffraction phenomena. The analytical results are confirmed by comparing experimental images with MATLAB-simulated images: simulations based on the revised parameters agree well with images acquired in practice.

  14. An investigation of image compression on NIIRS rating degradation through automated image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hua-Mei; Blasch, Erik; Pham, Khanh; Wang, Zhonghai; Chen, Genshe

    2016-05-01

    The National Imagery Interpretability Rating Scale (NIIRS) is a subjective quantification of static image quality widely adopted by the Geographic Information System (GIS) community. Efforts have been made to relate NIIRS image quality to sensor parameters using the general image quality equations (GIQE), which make it possible to automatically predict the NIIRS rating of an image through automated image analysis. In this paper, we present an automated procedure to extract a line edge profile, based on which the NIIRS rating of a given image can be estimated through the GIQEs if the ground sampling distance (GSD) is known. Steps involved include straight edge detection, edge stripe determination, and edge intensity determination, among others. Next, we show how to employ the GIQEs to estimate NIIRS degradation without knowing the ground-truth GSD and investigate the effects of image compression on the degradation of an image's NIIRS rating. Specifically, we consider the JPEG and JPEG2000 image compression standards. The extensive experimental results demonstrate the effect of image compression on the ground sampling distance and relative edge response, which are the major factors affecting the NIIRS rating.
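    As an illustration of how a GIQE relates sensor parameters to a NIIRS rating, here is a sketch based on one published form of GIQE version 4 (coefficients from Leachtenauer et al.; treat the exact constants as an assumption, and note the paper may use a different GIQE version):

    ```python
    import math

    def giqe4_niirs(gsd_inches, rer, h=1.0, g=1.0, snr=50.0):
        """NIIRS estimate from GIQE version 4: GSD is the ground sampling
        distance in inches, RER the relative edge response, H the edge
        overshoot, and G the post-processing noise gain."""
        if rer >= 0.9:
            a, b = 3.32, 1.559
        else:
            a, b = 3.16, 2.817
        return (10.251 - a * math.log10(gsd_inches)
                + b * math.log10(rer)
                - 0.656 * h - 0.344 * g / snr)

    # Compression mainly degrades the relative edge response, so the
    # NIIRS drop can be predicted from edge profiles alone.
    niirs_raw = giqe4_niirs(gsd_inches=12.0, rer=0.95)
    niirs_jpeg = giqe4_niirs(gsd_inches=12.0, rer=0.60)   # blurred edges
    ```

    With the values above, heavier compression (lower RER) lowers the predicted rating by roughly half a NIIRS level, consistent with the paper's observation that RER is a major factor.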

  15. Simulation and analysis about noisy range images of laser radar

    NASA Astrophysics Data System (ADS)

    Zhao, Mingbo; He, Jun; Fu, Qiang; Xi, Dan

    2011-06-01

    A measured range image from an imaging laser radar (ladar) is usually disturbed by dropouts and outliers. Because measured data are difficult to obtain and the noise levels of dropouts and outliers are hard to control, a new method for simulating noisy range images is proposed. Based on the noise formation mechanism of the ladar range image, an accurate ladar range imaging model is formulated that includes three major influencing factors: speckle, atmospheric turbulence and receiver noise. Noisy range images under different scenarios are obtained using MATLAB. Analysis of the simulation results reveals that: (1) regardless of the detection strategy, speckle, atmospheric turbulence and receiver noise are the major factors causing dropouts and outliers; (2) receiver noise by itself has a limited effect on outliers, but when other factors (speckle, atmospheric turbulence, etc.) are also present, its effect is sharply enhanced; (3) both dropouts and outliers occur in background and target regions.
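    A minimal sketch of how dropouts and outliers might be injected into a synthetic range image; the gamma-speckle model, thresholds and rates below are illustrative assumptions, not the authors' imaging model:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    h, w = 32, 32
    true_range = np.full((h, w), 1000.0)      # flat background at 1 km
    true_range[8:24, 8:24] = 950.0            # square target region

    # Speckle: gamma-distributed fading of the return power (mean 1).
    power = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=(h, w))
    snr = 10.0 * power                        # nominal SNR scaled by fading

    # Weak returns fall below the detection threshold (dropouts), and a
    # small fraction of detections lock onto noise peaks (outliers).
    dropout = snr < 3.0
    outlier = (rng.random((h, w)) < 0.01) & ~dropout

    noisy = true_range + rng.normal(0.0, 0.3, size=(h, w))   # ranging jitter
    noisy[outlier] = rng.uniform(900.0, 1100.0, size=int(outlier.sum()))
    noisy[dropout] = np.nan                                  # no detection
    ```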

  16. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an ophthalmologist making a pathological diagnosis in patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volume from JPEG-formatted image files that have been converted from medical images acquired with an anterior-chamber optical coherence tomographer (AC-OCT) and its image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients is analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. The results show that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, measures should be taken to simplify the manual image-preprocessing work.

  17. Inappropriateness of breast imaging: cost analysis.

    PubMed

    Pistolese, Chiara Adriana; Ciarrapico, Anna Micaela; della Gatta, Francesca; Simonetti, Giovanni

    2013-09-01

    The aim of this study was to assess how an incorrect indication for an examination may affect the diagnostic workup and diagnosis as well as healthcare expenditure. We considered all the requests for breast imaging (mammography, ultrasound and magnetic resonance imaging) received by our radiology department between October 2010 and December 2010, and assessed their appropriateness based on the patient's age and the clinical question, if present. We then analysed the unnecessary costs resulting from inappropriate requests. Out of a total of 1500 requests for ultrasound examination, the request was appropriate in 855 (57%) cases; out of a total of 2350 requests for mammography, the request was appropriate in 493 (21%) cases; out of a total of 100 requests for magnetic resonance imaging, the request was appropriate in 83 (83%) cases. The cost deriving from inappropriate requests was 51,235.04 Euros. Improving the timeliness of diagnosis is an important goal to be pursued by enhancing the available health services, improving communication and coordination of the different professionals involved and optimising diagnostic pathways in order to reduce healthcare spending.

  18. Advanced Imaging Techniques for Multiphase Flows Analysis

    NASA Astrophysics Data System (ADS)

    Amoresano, A.; Langella, G.; Di Santo, M.; Iodice, P.

    2017-08-01

    Advanced numerical techniques, such as fuzzy logic and neural networks, have been applied in this work to digital images acquired from two applications, a centrifugal pump and a stationary spray, in order to define, in a stochastic way, the evolution of the gas-liquid interface. Starting from the numeric matrix representing the image, it is possible to characterize geometrical parameters and the time evolution of the jet. The algorithm uses fuzzy-logic concepts to binarize the pixels according to their chromaticity, exploiting the difference in light scattering between the gas and the liquid phases. Starting from a fixed primary threshold, the technique separates 'gas' pixels from 'liquid' pixels, so that the most probable boundary lines of the spray can be defined. By acquiring images continuously at a fixed frame rate, a finer threshold can be selected and, in the limit, the most probable geometrical parameters of the jet can be detected.

  19. Improving lip wrinkles: lipstick-related image analysis.

    PubMed

    Ryu, Jong-Seong; Park, Sun-Gyoo; Kwak, Taek-Jong; Chang, Min-Youl; Park, Moon-Eok; Choi, Khee-Hwan; Sung, Kyung-Hye; Shin, Hyun-Jong; Lee, Cheon-Koo; Kang, Yun-Seok; Yoon, Moung-Seok; Rang, Moon-Jeong; Kim, Seong-Jin

    2005-08-01

    The appearance of lip wrinkles is problematic when they adversely affect lipstick make-up, causing incomplete color tone, pigment spread and pigment remnants. An objective assessment method for lip wrinkle status is therefore needed, by which the potential of wrinkle-improving products for the lips can be screened. The present study aimed to identify useful parameters from image analysis of lip wrinkles as affected by lipstick application. Digital photographic images of the lips before and after lipstick application were assessed in 20 female volunteers. Color tone was measured by hue, saturation and intensity parameters, and time-related pigment spread was calculated as the area beyond the vermilion border using image-analysis software (Image-Pro). The efficacy of a wrinkle-improving lipstick containing asiaticoside was evaluated in 50 women by subjective and objective methods, including image analysis, in a double-blind, placebo-controlled fashion. The color tone and spread phenomenon after lipstick make-up were markedly affected by lip wrinkles. The standard deviation of the saturation value obtained with the image-analysis software proved to be a good parameter for lip wrinkles. After use of the lipstick containing asiaticoside for 8 weeks, changes in visual grading scores and replica analysis indicated a wrinkle-improving effect. As the depth and number of wrinkles were reduced, the lipstick make-up appearance measured by image analysis also improved significantly. The lip wrinkle pattern together with lipstick make-up can thus be evaluated by the image-analysis system in addition to traditional assessment methods, and this evaluation system is expected to be useful for testing the efficacy of wrinkle-reducing lipsticks, which has not been described in previous dermatologic clinical studies.
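    The saturation-based wrinkle parameter can be illustrated with a small sketch; the HSV-style saturation formula and the toy lip patches are assumptions standing in for Image-Pro's actual computation:

    ```python
    import numpy as np

    def saturation_std(rgb):
        """Standard deviation of HSV-style saturation over a lip region;
        a hypothetical stand-in for the saturation-based wrinkle
        parameter (larger values = more uneven lipstick tone)."""
        rgb = np.asarray(rgb, dtype=float)
        cmax = rgb.max(axis=-1)
        cmin = rgb.min(axis=-1)
        sat = (cmax - cmin) / np.maximum(cmax, 1e-9)
        return float(sat.std())

    # Uniform lipstick colour gives zero saturation spread; washed-out
    # pixels in wrinkle creases increase the spread.
    smooth = np.full((4, 4, 3), [200.0, 40.0, 60.0])
    wrinkled = smooth.copy()
    wrinkled[::2, ::2] = [200.0, 120.0, 130.0]
    ```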

  20. Statistical analysis of dynamic sequences for functional imaging

    NASA Astrophysics Data System (ADS)

    Kao, Chien-Min; Chen, Chin-Tu; Wernick, Miles N.

    2000-04-01

    Factor analysis of medical image sequences (FAMIS), which concerns the simultaneous identification of homogeneous regions (factor images) and the characteristic temporal variations (factors) inside these regions from a temporal sequence of images by statistical analysis, is one of the major challenges in medical imaging. We contribute to this important area by proposing a two-step approach. First, we study the use of the noise-adjusted principal component (NAPC) analysis developed by Lee et al. for identifying the characteristic temporal variations in dynamic scans acquired by PET and MRI. NAPC allows us to effectively reject data noise and substantially reduce data dimension based on signal-to-noise ratio considerations. Subsequently, a simple spatial analysis based on the criteria of minimal spatial overlap and non-negativity of the factor images is applied to extract the factors and factor images. In our simulation study, preliminary results indicate that the proposed approach can accurately identify the factor images. However, the factors are not completely separated.
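    The NAPC step can be sketched in a few lines: whiten the data by the noise covariance, then apply ordinary PCA, so that components are ranked by signal-to-noise ratio rather than raw variance. The toy dynamic sequence and the known diagonal noise covariance below are assumptions for illustration:

    ```python
    import numpy as np

    def napc(data, noise_cov):
        """Noise-adjusted principal components: whiten by the noise
        covariance, then apply ordinary PCA, so that components are
        ordered by signal-to-noise ratio rather than raw variance."""
        w, v = np.linalg.eigh(noise_cov)
        F = v @ np.diag(1.0 / np.sqrt(w)) @ v.T   # noise-whitening transform
        white = (data - data.mean(axis=0)) @ F
        evals, evecs = np.linalg.eigh(np.cov(white, rowvar=False))
        order = np.argsort(evals)[::-1]
        return white @ evecs[:, order], evals[order]

    # Toy dynamic sequence: 500 "pixels" over 3 time frames driven by one
    # shared temporal factor, with frame-dependent noise.
    rng = np.random.default_rng(0)
    factor = rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0, 0.5]])
    noise_sd = np.array([0.1, 0.5, 0.2])
    data = factor + rng.normal(size=(500, 3)) * noise_sd
    scores, snr_evals = napc(data, np.diag(noise_sd ** 2))
    ```

    The single temporal factor dominates the leading noise-adjusted component, while the remaining components carry only unit-level whitened noise.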

  1. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assessing the image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow systematic and repeatable measurement of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis toolkit to analyze CT images of the phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates components with sparse loadings, used in conjunction with Hotelling's T2 statistical analysis to compare, qualify, and detect faults in the tested systems.
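    The SPCA-plus-Hotelling idea can be illustrated with a toy sketch; the soft-thresholded-PCA stand-in below is a crude approximation, not the authors' modified SPCA, and the synthetic "scan metrics" are invented for the example:

    ```python
    import numpy as np

    def sparse_pca_components(X, n_components, threshold=0.15):
        """Crude sparse-PCA stand-in: ordinary PCA loadings with small
        entries soft-thresholded to zero, then renormalised.  The real
        modified SPCA is more sophisticated; this is only an illustration."""
        Xc = X - X.mean(axis=0)
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        comps = vt[:n_components]
        comps = np.sign(comps) * np.maximum(np.abs(comps) - threshold, 0.0)
        norms = np.linalg.norm(comps, axis=1, keepdims=True)
        return comps / np.where(norms > 0, norms, 1.0)

    def hotelling_t2(scores, x):
        """Hotelling's T^2 of one score vector against training scores."""
        d = x - scores.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
        return float(d @ cov_inv @ d)

    rng = np.random.default_rng(1)
    # Toy data: rows = phantom scans, columns = 10 image-quality metrics
    # driven by 3 latent factors.
    loadings = rng.normal(size=(3, 10))
    data = rng.normal(size=(60, 3)) @ loadings + 0.1 * rng.normal(size=(60, 10))

    comps = sparse_pca_components(data, n_components=3)
    scores = (data - data.mean(axis=0)) @ comps.T
    baseline = np.array([hotelling_t2(scores, s) for s in scores])

    # A faulty scan exaggerates one factor pattern and shows a large T^2.
    faulty = data.mean(axis=0) + 6.0 * loadings[0]
    t2_fault = hotelling_t2(scores, (faulty - data.mean(axis=0)) @ comps.T)
    ```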

  2. Bayesian principal geodesic analysis in diffeomorphic image registration.

    PubMed

    Zhang, Miaomiao; Fletcher, P Thomas

    2014-01-01

    Computing a concise representation of the anatomical variability found in large sets of images is an important first step in many statistical shape analyses. In this paper, we present a generative Bayesian approach for automatic dimensionality reduction of shape variability represented through diffeomorphic mappings. To achieve this, we develop a latent variable model for principal geodesic analysis (PGA) that provides a probabilistic framework for factor analysis on diffeomorphisms. Our key contribution is a Bayesian inference procedure for model parameter estimation and simultaneous detection of the effective dimensionality of the latent space. We evaluate our proposed model for atlas and principal geodesic estimation on the OASIS brain database of magnetic resonance images. We show that the automatically selected latent dimensions from our model are able to reconstruct unseen brain images with lower error than equivalent linear principal components analysis (LPCA) models in the image space, and it also outperforms tangent space PCA (TPCA) models in the diffeomorphism setting.

  3. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    PubMed

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

    This paper presents a new approach to gallbladder ultrasound image processing and analysis aimed at detecting disease symptoms in the processed images. First, a new method of extracting gallbladder contours from ultrasound images is presented. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ. In most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most difficult to analyse owing to the echogenic inconsistency of the structures under observation. This paper provides a novel algorithm for the holistic extraction of gallbladder image contours, based on rank filtration and on the analysis of histogram sections of the examined organ. The second part concerns detecting lesion symptoms of the gallbladder. Automating a diagnostic process always comes down to developing algorithms that analyze the object of diagnosis and verify the occurrence of symptoms related to a given affliction; usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classical pattern-analysis approach, such as using rules to determine the illness based on the detected symptoms. This paper discusses pattern-analysis algorithms for gallbladder image interpretation aimed at classifying the most frequent illness symptoms of this organ.

  4. Object density-based image segmentation and its applications in biomedical image analysis.

    PubMed

    Yu, Jinhua; Tan, Jinglu

    2009-12-01

    In many applications of medical image analysis, the density of an object is the most important feature for isolating an area of interest (image segmentation). In this research, an object density-based image segmentation methodology is developed, which incorporates intensity-based, edge-based and texture-based segmentation techniques. The proposed method consists of three main stages: preprocessing, object segmentation and final segmentation. Image enhancement, noise reduction and layer-of-interest extraction are several subtasks of preprocessing. Object segmentation utilizes a marker-controlled watershed technique to identify each object of interest (OI) from the background. A marker estimation method is proposed to minimize over-segmentation resulting from the watershed algorithm. Object segmentation provides an accurate density estimation of OI which is used to guide the subsequent segmentation steps. The final stage converts the distribution of OI into textural energy by using fractal dimension analysis. An energy-driven active contour procedure is designed to delineate the area with desired object density. Experimental results show that the proposed method is 98% accurate in segmenting synthetic images. Segmentation of microscopic images and ultrasound images shows the potential utility of the proposed method in different applications of medical image processing.
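    Marker-controlled watershed segmentation, the core of the object-segmentation stage above, can be sketched with SciPy's IFT watershed (a stand-in for the authors' implementation; the toy blobs and seed placement are assumptions):

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy density image: two bright blobs (objects of interest) on a
    # dark background, standing in for a preprocessed layer of interest.
    yy, xx = np.mgrid[0:40, 0:40]
    img = (np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / 30.0)
           + np.exp(-((yy - 28) ** 2 + (xx - 28) ** 2) / 30.0))

    # Markers: one seed inside each object plus one background seed.
    # Good marker estimation is what prevents over-segmentation.
    markers = np.zeros(img.shape, dtype=np.int16)
    markers[12, 12] = 1
    markers[28, 28] = 2
    markers[0, 0] = 3

    # Flood from the seeds over the inverted intensity, so each basin
    # stops at the dark valley between the blobs.
    elevation = (255 * (1.0 - img / img.max())).astype(np.uint8)
    labels = ndimage.watershed_ift(elevation, markers)
    ```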

  5. Embedded signal approach to image texture reproduction analysis

    NASA Astrophysics Data System (ADS)

    Burns, Peter D.; Baxter, Donald

    2014-01-01

    Since image processing aimed at reducing image noise can also remove important texture, standard methods for evaluating the capture and retention of image texture are currently being developed. Concurrently, the evolution of the intelligence and performance of camera noise-reduction (NR) algorithms poses a challenge for these protocols. Many NR algorithms are 'content-aware', which can lead to different levels of NR being applied to various regions within the same digital image. We review the requirements for improved texture measurement. The challenge is to evaluate image signal (texture) content without having a test signal interfere with the processing of the natural scene. We describe an approach to texture reproduction analysis that uses periodic test signals embedded within image texture regions, and a target that combines natural image texture with a low-amplitude multi-frequency periodic signal embedded in the texture region. Two approaches for embedding periodic test signals in image texture are described. The stacked sine-wave method uses a single combined, or stacked, region with several frequency components. The second method uses a low-amplitude version of the IEC 61146-1 sine-wave multi-burst chart combined with image texture; a 3x3 grid of smaller regions, each with a single frequency, constitutes the test target. Both methods were evaluated using a simulated digital-camera capture path that included detector noise and optical MTF, for a range of camera exposure/ISO settings. Two types of image texture were used with the method: natural grass and a computed 'dead-leaves' region composed of random circles. The embedded-signal methods were tested for accuracy with respect to image noise over a wide range of levels, and then further evaluated with an adaptive noise-reduction image-processing method.
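    The stacked sine-wave idea, a low-amplitude multi-frequency test signal embedded in texture whose tones can still be recovered spectrally, can be sketched as follows (the Gaussian "texture", amplitudes and frequencies are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    h, w = 64, 64
    texture = rng.normal(0.5, 0.1, size=(h, w))   # stand-in for grass / dead leaves

    # Low-amplitude "stacked" sine signal: several frequencies combined
    # in one region (frequencies in cycles per image width).
    x = np.arange(w) / w
    freqs = [4, 8, 16]
    signal = sum(0.02 * np.sin(2 * np.pi * f * x) for f in freqs)
    target = texture + signal[None, :]            # signal embedded in texture

    # Row-averaging suppresses the texture; the spectrum of the column
    # means still shows peaks at the embedded test frequencies.
    spectrum = np.abs(np.fft.rfft(target.mean(axis=0) - target.mean()))
    ```

    Comparing these peak amplitudes before and after noise reduction gives a texture-retention measure that is insensitive to where in the scene the NR algorithm chose to smooth.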

  6. Automated image analysis of nuclear atypia in high-power field histopathological image.

    PubMed

    Lu, Cheng; Ji, Mengyao; Ma, Zhen; Mandal, Mrinal

    2015-06-01

    We developed a computer-aided technique to study nuclear atypia classification in high-power field haematoxylin and eosin stained images. An automated technique for nuclear atypia score (NAS) calculation is proposed that uses digital image analysis and machine-learning methods to measure the NAS of haematoxylin and eosin stained images. The technique first segments all nuclei regions. A set of morphology and texture features is extracted from the pre-segmented nuclei regions, and the histogram of each feature is then calculated to characterize the statistical information of the nuclei. Finally, a support vector machine classifier is applied to classify a high-power field image into different nuclear atypia classes. A set of 1188 digital images was analysed in the experiment. We successfully differentiated high-power field images with NAS1 versus non-NAS1, NAS2 versus non-NAS2 and NAS3 versus non-NAS3, with areas under the receiver-operating characteristic curve of 0.90, 0.86 and 0.87, respectively. In the three-class evaluation, the average classification accuracy was 78.79%. We found that texture-based features provide the best performance for classification. The automated technique is able to quantify statistical features that may be difficult for humans to measure and demonstrates the future potential of automated image analysis in histopathology. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  7. Prospective Evaluation of Multimodal Optical Imaging with Automated Image Analysis to Detect Oral Neoplasia In Vivo.

    PubMed

    Quang, Timothy; Tran, Emily Q; Schwarz, Richard A; Williams, Michelle D; Vigneswaran, Nadarajah; Gillenwater, Ann M; Richards-Kortum, Rebecca

    2017-10-01

    The 5-year survival rate for patients with oral cancer remains low, in part because diagnosis often occurs at a late stage. Early and accurate identification of oral high-grade dysplasia and cancer can help improve patient outcomes. Multimodal optical imaging is an adjunctive diagnostic technique in which autofluorescence imaging is used to identify high-risk regions within the oral cavity, followed by high-resolution microendoscopy to confirm or rule out the presence of neoplasia. Multimodal optical images were obtained from 206 sites in 100 patients. Histologic diagnosis, either from a punch biopsy or an excised surgical specimen, was used as the gold standard for all sites. Histopathologic diagnoses of moderate dysplasia or worse were considered neoplastic. Images from 92 sites in the first 30 patients were used as a training set to develop automated image analysis methods for identification of neoplasia. Diagnostic performance was evaluated prospectively using images from 114 sites in the remaining 70 patients as a test set. In the training set, multimodal optical imaging with automated image analysis correctly classified 95% of nonneoplastic sites and 94% of neoplastic sites. Among the 56 sites in the test set that were biopsied, multimodal optical imaging correctly classified 100% of nonneoplastic sites and 85% of neoplastic sites. Among the 58 sites in the test set that corresponded to a surgical specimen, multimodal imaging correctly classified 100% of nonneoplastic sites and 61% of neoplastic sites. These findings support the potential of multimodal optical imaging to aid in the early detection of oral cancer. Cancer Prev Res; 10(10); 563-70. ©2017 AACR. ©2017 American Association for Cancer Research.

  8. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key yield-estimation parameters for the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms, based on the Canny and the logarithmic image processing approaches, were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R(2) between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-analysis model to predict berry weight was 84%. The new, low-cost methodology presented here enables the assessment of cluster yield components, saving time and providing inexpensive information compared with current manual methods. © 2014 Society of Chemical Industry.
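    Berry detection by the Hough transform can be sketched with a minimal fixed-radius circular Hough accumulator (a simplification: the paper searches over radii and uses Canny/LIP edges, while this example assumes known edge points and radius):

    ```python
    import numpy as np

    def hough_circle_center(edge_points, radius, shape, n_angles=90):
        """Fixed-radius circular Hough transform: every edge point votes
        for all candidate centres lying `radius` away from it; the
        accumulator maximum is the most supported centre."""
        acc = np.zeros(shape, dtype=int)
        thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
        for (y, x) in edge_points:
            cy = np.round(y - radius * np.sin(thetas)).astype(int)
            cx = np.round(x - radius * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), 1)
        return np.unravel_index(acc.argmax(), acc.shape)

    # Synthetic "berry": edge points on a circle of radius 10 at (32, 32).
    t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
    edges = list(zip(32 + 10 * np.sin(t), 32 + 10 * np.cos(t)))
    center = hough_circle_center(edges, radius=10, shape=(64, 64))
    ```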

  9. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.

    2010-09-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.

  10. Cnn Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of this paper is to obtain high-quality image upscaling for the noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning: the dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods, such as DCCI, SI-3 and SRCNN, on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existent details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.
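    ZCA whitening of training data, which is presumably what Zero Component Analysis refers to here, can be sketched as follows (the correlated toy data are an assumption for illustration):

    ```python
    import numpy as np

    def zca_whiten(X, eps=1e-5):
        """ZCA whitening: decorrelate features and equalise their
        variances while staying as close as possible to the original
        pixel space, unlike plain PCA whitening."""
        Xc = X - X.mean(axis=0)
        w, v = np.linalg.eigh(np.cov(Xc, rowvar=False))
        W = v @ np.diag(1.0 / np.sqrt(w + eps)) @ v.T
        return Xc @ W

    rng = np.random.default_rng(3)
    # Correlated toy "patch" data: 1000 samples of 8 features.
    patches = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 8))
    white = zca_whiten(patches)
    cov_after = np.cov(white, rowvar=False)   # approximately identity
    ```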

  11. Multivariate image analysis for process monitoring and control

    NASA Astrophysics Data System (ADS)

    MacGregor, John F.; Bharati, Manish H.; Yu, Honglu

    2001-02-01

    Information from on-line imaging sensors has great potential for the monitoring and control of quality in spatially distributed systems. The major difficulty lies in the efficient extraction of information from the images: information such as the frequencies of occurrence of specific and often subtle features, and their locations in the product or process space. This paper presents an overview of multivariate image analysis methods based on Principal Component Analysis and Partial Least Squares for decomposing the highly correlated data present in multi-spectral images. The frequencies of occurrence of certain features in the image, regardless of their spatial locations, can be easily monitored in the space of the principal components. The spatial locations of these features can then be obtained by transposing highlighted pixels from the PC score space into the original image space. In this manner it is possible to detect and locate even very subtle features from on-line imaging sensors for the purpose of statistical process control or feedback control of spatial processes. The concepts and potential of the approach are illustrated using a sequence of LANDSAT satellite multispectral images depicting a pass over a certain region of the earth's surface. Potential applications of these methods in industrial process monitoring are discussed for a variety of areas, such as pulp and paper sheet products, lumber and polymer films.
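    The unfold-PCA-refold workflow, monitoring in score space and then mapping highlighted pixels back to the image, can be sketched as follows (the toy 3-band image and the 3-sigma highlighting rule are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # Toy 3-band "multispectral image": noisy background plus a square
    # feature with a distinctive spectral signature.
    img = rng.normal(0.0, 0.05, size=(32, 32, 3)) + np.array([0.2, 0.2, 0.2])
    img[10:18, 10:18] += np.array([0.5, -0.3, 0.4])

    # Unfold to (pixels x bands), apply PCA, refold scores into images.
    X = img.reshape(-1, 3)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = (Xc @ vt.T).reshape(32, 32, 3)

    # Pixels that stand out in the first score image map straight back
    # to the feature's spatial location in the original image.
    t1 = scores[..., 0]
    mask = np.abs(t1) > 3 * t1.std()
    ```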

  12. A TSVD Analysis of Microwave Inverse Scattering for Breast Imaging

    PubMed Central

    Shea, Jacob D.; Van Veen, Barry D.; Hagness, Susan C.

    2013-01-01

    A variety of methods have been applied to the inverse scattering problem for breast imaging at microwave frequencies. While many techniques have been leveraged toward a microwave imaging solution, they are all fundamentally dependent on the quality of the scattering data. Evaluating and optimizing the information contained in the data are, therefore, instrumental in understanding and achieving optimal performance from any particular imaging method. In this paper, a method of analysis is employed for the evaluation of the information contained in simulated scattering data from a known dielectric profile. The method estimates optimal imaging performance by mapping the data through the inverse of the scattering system. The inverse is computed by truncated singular-value decomposition of a system of scattering equations. The equations are made linear by use of the exact total fields in the imaging volume, which are available in the computational domain. The analysis is applied to anatomically realistic numerical breast phantoms. The utility of the method is demonstrated for a given imaging system through the analysis of various considerations in system design and problem formulation. The method offers an avenue for decoupling the problem of data selection from the problem of image formation from that data. PMID:22113770

  14. Shortwave Infrared Imaging Spectroscopy for Analysis of Ancient Paintings.

    PubMed

    Wu, Taixia; Li, Guanghua; Yang, Zehua; Zhang, Hongming; Lei, Yong; Wang, Nan; Zhang, Lifu

    2016-11-21

    Spectral analysis is one of the main non-destructive techniques used to examine cultural relics. Hyperspectral imaging technology, especially on the shortwave infrared (SWIR) band, can clearly extract information from paintings, such as color, pigment composition, damage characteristics, and painting techniques. All of these characteristics have significant scientific and practical value in the study of ancient paintings and other relics and in their protection and restoration. In this study, an ancient painting, numbered Gu-6541, which had been found in the Forbidden City, served as a sample. A ground-based SWIR imaging spectrometer was used to produce hyperspectral images with high spatial and spectral resolution. Results indicated that SWIR imaging spectral data greatly facilitates the extraction of line features used in drafting, even using a single band image. It can be used to identify and classify mineral pigments used in paintings. These images can detect alterations and traces of daub used in painting corrections and, combined with hyperspectral data analysis methods such as band combination or principal component analysis, such information can be extracted to highlight outcomes of interest. In brief, the SWIR imaging spectral technique was found to have a highly favorable effect on the extraction of line features from drawings and on the identification of colors, classification of paintings, and extraction of hidden information.

  15. Independent component analysis for artefact separation in astrophysical images.

    PubMed

    Funaro, Maria; Oja, Erkki; Valpola, Harri

    2003-01-01

    In this paper, we demonstrate that independent component analysis (ICA), a novel signal processing technique, is a powerful method for separating artefacts from astrophysical image data. When studying far-out galaxies from a series of consecutive telescope images, there are several sources of artefacts that influence all the images, such as camera noise, atmospheric fluctuations and disturbances, cosmic rays, and stars in our own galaxy. In the analysis of astrophysical image data it is very important to implement techniques that can detect artefacts with great accuracy, so that genuine physical events are not eliminated from the data along with them. For this problem, the linear ICA model holds very accurately because such artefacts are all theoretically independent of each other and of the physical events. Using image data on the M31 Galaxy, it is shown that several artefacts can be detected and recognized based on their temporal pixel luminosity profiles and independent component images. The obtained separation is good and the method is very fast. It is also shown that ICA outperforms principal component analysis in this task. For these reasons, ICA may provide a very useful pre-processing technique for the large amounts of available telescope image data.
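
    A minimal, self-contained FastICA sketch (not the authors' code) shows how two independently generated temporal profiles can be recovered from linear mixtures, the same principle used here to separate artefacts from physical events:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 8, n)

# Two independent "temporal profiles": a smooth signal and a switching,
# artefact-like signal, linearly mixed by an unknown matrix.
s1 = np.sin(2 * np.pi * t)
s2 = np.sign(np.sin(3 * np.pi * t))          # square wave
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])       # unknown mixing
X = A @ S

# Center and whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / n
d, E = np.linalg.eigh(cov)
Z = (E / np.sqrt(d)) @ E.T @ X               # whitened mixtures

# One-unit FastICA with tanh nonlinearity; deflation for the second unit.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / n - (1 - g**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from found units
        w = w_new / np.linalg.norm(w_new)
    W[i] = w
S_est = W @ Z                                # recovered independent components
```

    The components come back up to sign and scale, which is why artefact identification in the paper relies on the shape of the temporal luminosity profiles rather than their amplitude.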

  16. Determination of Mean Temperatures of Normal Whole Breast and Breast Quadrants by Infrared Imaging and Image Analysis

    DTIC Science & Technology

    2007-11-02

    Now with the advent of uncooled staring array digital infrared imaging systems (Prism 2000; Bioyear Group, Houston, TX) and image analysis, numerical...patients. These results are consistent with our previous results with both objective image analysis and subjective visual analysis (15% of screened

  17. Fake fingerprint detection based on image analysis

    NASA Astrophysics Data System (ADS)

    Jin, Sang-il; Bae, You-suk; Maeng, Hyun-ju; Lee, Hyun-suk

    2010-01-01

    Fingerprint recognition systems have become prevalent in various security applications. However, recent studies have shown that it is not difficult to deceive such systems with fake fingerprints made of silicone or gelatin. The fake fingerprints have almost the same ridge-valley patterns as genuine fingerprints, so conventional systems are unable to detect them without a dedicated detection method. Many previous countermeasures against fake fingers required extra sensors and thus lacked practicality. This paper proposes a practical and effective method that detects fake fingerprints using only an image sensor. Two criteria are introduced to differentiate genuine and fake fingerprints: the histogram distance and the Fourier spectrum distance. In the proposed method, after identifying a user's input fingerprint, the system computes the two distances between the input and a reference derived from the user's registered fingerprints. Depending on the two distances, the system classifies the input as a genuine fingerprint or a fake. In the experiment, 2,400 fingerprint images including 1,600 fakes were tested, and the proposed method showed a high recognition rate of 95%. The fake fingerprints were all accepted by a commercial system, confirming that they were realistic enough to make the experiment meaningful.
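
    The two criteria can be prototyped directly in NumPy. The functions below are an illustrative sketch of the general idea, not the paper's exact definitions of the histogram and Fourier spectrum distances:

```python
import numpy as np

def histogram_distance(img_a, img_b, bins=32):
    """L1 distance between normalized gray-level histograms."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    return np.abs(ha - hb).sum()

def fourier_spectrum_distance(img_a, img_b):
    """L2 distance between normalized 2-D Fourier magnitude spectra."""
    fa = np.abs(np.fft.fft2(img_a)); fa /= fa.sum()
    fb = np.abs(np.fft.fft2(img_b)); fb /= fb.sum()
    return np.sqrt(((fa - fb) ** 2).sum())

# Toy data: a "reference" image and a blurred variant standing in for a
# fake whose gelatin surface smooths fine ridge texture (hypothetical).
rng = np.random.default_rng(2)
genuine = rng.integers(0, 256, (64, 64)).astype(float)
smoothed = np.convolve(genuine.ravel(), np.ones(5) / 5,
                       mode="same").reshape(64, 64)
```

    In the paper's scheme, an input whose two distances to the enrolled reference exceed learned thresholds would be rejected as a fake.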

  18. Ballistics projectile image analysis for firearm identification.

    PubMed

    Li, Dongguang

    2006-10-01

    This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it is possible to identify not only the type and model of a firearm, but also each individual weapon, just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. The paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. The experiments discussed in this paper were performed on images acquired from 16 different weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely by digitizing and analyzing the fired projectile specimens.

  19. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  20. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  1. Image analysis tools and emerging algorithms for expression proteomics

    PubMed Central

    English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.

    2012-01-01

    Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614

  2. Spectral mixture analysis of EELS spectrum-images.

    PubMed

    Dobigeon, Nicolas; Brun, Nathalie

    2012-09-01

    Recent advances in detectors and computer science have enabled the acquisition and the processing of multidimensional datasets, in particular in the field of spectral imaging. Benefiting from these new developments, Earth scientists try to recover the reflectance spectra of macroscopic materials (e.g., water, grass, mineral types…) present in an observed scene and to estimate their respective proportions in each mixed pixel of the acquired image. This task is usually referred to as spectral mixture analysis or spectral unmixing (SU). SU aims at decomposing the measured pixel spectrum into a collection of constituent spectra, called endmembers, and a set of corresponding fractions (abundances) that indicate the proportion of each endmember present in the pixel. Similarly, when processing spectrum-images, microscopists usually try to map elemental, physical and chemical state information of a given material. This paper reports how a SU algorithm dedicated to remote sensing hyperspectral images can be successfully applied to analyze spectrum-images resulting from electron energy-loss spectroscopy (EELS). SU generally overcomes standard limitations inherent to other multivariate statistical analysis methods, such as principal component analysis (PCA) or independent component analysis (ICA), that have been previously used to analyze EELS maps. Indeed, ICA and PCA may perform poorly for linear spectral mixture analysis due to the strong dependence between the abundances of the different materials. One example is presented here to demonstrate the potential of this technique for EELS analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
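
    The linear mixing model behind SU can be sketched as follows. This toy example (synthetic Gaussian "endmembers", unconstrained least squares) omits the nonnegativity and sum-to-one constraints that practical unmixing algorithms enforce:

```python
import numpy as np

# Three synthetic "endmember" spectra (columns), standing in for
# reference spectra of the constituent materials.
bands = 50
x = np.linspace(0, 1, bands)
E = np.column_stack([
    np.exp(-((x - 0.2) ** 2) / 0.01),
    np.exp(-((x - 0.5) ** 2) / 0.02),
    np.exp(-((x - 0.8) ** 2) / 0.01),
])

a_true = np.array([0.6, 0.3, 0.1])   # abundances in one mixed pixel
pixel = E @ a_true                   # linear mixing model: pixel = E a

# Unconstrained least-squares inversion of the mixing model; dedicated
# SU algorithms additionally impose nonnegative, sum-to-one abundances.
a_est, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

    Applying this inversion pixel-by-pixel over a spectrum-image yields the abundance maps that, in the EELS setting, localize each material.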

  3. Quantitative sonographic image analysis for hepatic nodules: a pilot study.

    PubMed

    Matsumoto, Naoki; Ogawa, Masahiro; Takayasu, Kentaro; Hirayama, Midori; Miura, Takao; Shiozawa, Katsuhiko; Abe, Masahisa; Nakagawara, Hiroshi; Moriyama, Mitsuhiko; Udagawa, Seiichi

    2015-10-01

    The aim of this study was to investigate the feasibility of quantitative image analysis to differentiate hepatic nodules on gray-scale sonographic images. We retrospectively evaluated 35 nodules from 31 patients with hepatocellular carcinoma (HCC), 60 nodules from 58 patients with liver hemangioma, and 22 nodules from 22 patients with liver metastasis. Gray-scale sonographic images were evaluated with subjective judgment and image analysis using ImageJ software. Reviewers classified the shape of nodules as irregular or round, and the surface of nodules as rough or smooth. Circularity values were lower in the irregular group than in the round group (median 0.823, 0.892; range 0.641-0.915, 0.784-0.932, respectively; P = 3.21 × 10⁻¹⁰). Solidity values were lower in the rough group than in the smooth group (median 0.957, 0.968; range 0.894-0.986, 0.933-0.988, respectively; P = 1.53 × 10⁻⁴). The HCC group had higher circularity and solidity values than the hemangioma group. The HCC and liver metastasis groups had lower median, mean, modal, and minimum gray values than the hemangioma group. Multivariate analysis showed circularity [standardized odds ratio (OR), 2.077; 95% confidence interval (CI) = 1.295-3.331; P = 0.002] and minimum gray value (OR 0.482; 95% CI = 0.956-0.990; P = 0.001) to be factors predictive of malignancy. The combination of subjective judgment and image analysis provided 58.3% sensitivity and 89.5% specificity with AUC = 0.739, representing an improvement over subjective judgment alone (68.4% sensitivity, 75.0% specificity, AUC = 0.701) (P = 0.008). Quantitative image analysis of ultrasonic images of hepatic nodules may correlate with subjective judgment in predicting malignancy.
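
    The circularity measure used in such studies (4πA/P²) is straightforward to compute from a binary nodule mask. The sketch below uses a simple boundary-pixel count as the perimeter estimate, which is cruder than ImageJ's; solidity would additionally require a convex hull (e.g., via scipy.spatial.ConvexHull), which is omitted here:

```python
import numpy as np

def circularity(mask):
    """4*pi*area / perimeter^2, perimeter estimated from boundary pixels."""
    area = mask.sum()
    # Boundary pixels: in the mask but with at least one 4-neighbour outside.
    padded = np.pad(mask, 1)
    interior = (padded[1:-1, 2:] & padded[1:-1, :-2]
                & padded[2:, 1:-1] & padded[:-2, 1:-1])
    perimeter = (mask & ~interior).sum()
    return 4 * np.pi * area / perimeter ** 2

# A round "nodule" versus an irregular cross-shaped one.
yy, xx = np.mgrid[-32:33, -32:33]
disk = (xx**2 + yy**2) <= 25**2
cross = (np.abs(xx) <= 4) | (np.abs(yy) <= 4)
```

    As expected, the round mask scores markedly higher circularity than the irregular one, mirroring the round-versus-irregular split reported in the study.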

  4. Quantitative analysis of in vivo confocal microscopy images: a review.

    PubMed

    Patel, Dipika V; McGhee, Charles N

    2013-01-01

    In vivo confocal microscopy (IVCM) is a non-invasive method of examining the living human cornea. The recent trend towards quantitative studies using IVCM has led to the development of a variety of methods for quantifying image parameters. When selecting IVCM images for quantitative analysis, it is important to be consistent regarding the location, depth, and quality of images. All images should be de-identified, randomized, and calibrated prior to analysis. Numerous image analysis software packages are available, each with its own advantages and disadvantages. Criteria for analyzing corneal epithelium, sub-basal nerves, keratocytes, endothelium, and immune/inflammatory cells have been developed, although there is inconsistency among research groups regarding parameter definition. The quantification of stromal nerve parameters, however, remains a challenge. Most studies report lower inter-observer repeatability compared with intra-observer repeatability, and observer experience is known to be an important factor. Standardization of IVCM image analysis through the use of a reading center would be crucial for any future large, multi-centre clinical trials using IVCM.

  5. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA), an extension of the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. To assess the algorithms quantitatively, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated, measuring the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising; visually, the recovered image is hard to distinguish from the original noiseless image.
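
    The two fidelity indices are easy to state in NumPy. The sketch below implements PSNR and a single-window ("global") simplification of SSIM; the full SSIM index averages the same statistic over local windows:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (gray-level fidelity)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak**2 / mse)

def ssim_global(ref, img, peak=255.0):
    """Single-window SSIM (structure-level fidelity, simplified)."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = ref.astype(float), img.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cxy + c2))
            / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(3)
clean = rng.integers(0, 256, (64, 64)).astype(float)
noisy = clean + rng.normal(0, 10, clean.shape)   # simulated sensor noise
```

    A denoiser is then judged by how far it pushes PSNR up and SSIM back toward 1 relative to the noisy input.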

  6. Visual analysis of the computer simulation for both imaging and non-imaging optical systems

    NASA Astrophysics Data System (ADS)

    Barladian, B. K.; Potemin, I. S.; Zhdanov, D. D.; Voloboy, A. G.; Shapiro, L. S.; Valiev, I. V.; Birukov, E. D.

    2016-10-01

    Typical results of optical simulation are images generated on virtual sensors of various kinds. As a rule, these images represent the two-dimensional distribution of light values in Cartesian coordinates (luminance, illuminance) or in polar coordinates (luminous intensity). Virtual sensors allow the calculation and design of different kinds of illumination devices, stray light analysis, synthesis of photorealistic images of three-dimensional scenes under the complex illumination generated by optical systems, etc. Based on rich experience in the development and practical use of computer systems for virtual prototyping and photorealistic visualization, the authors formulated a number of basic requirements for the visualization and analysis of light simulation results represented as two-dimensional distributions of luminance, illuminance, and luminous intensity values. The requirements include tone mapping operators, pseudo-color imaging, visualization of spherical panoramas, regression analysis, analysis of image sections and regions, analysis of pixel values, image data export, etc. All of those requirements were satisfied in the designed software component for visual analysis of light simulation results. The "LumiVue" module is an integral part of the "Lumicept" modeling system and of the corresponding plug-in for the CATIA computer-aided design product. The article is illustrated with examples of the analysis of calculated two-dimensional distributions of luminous intensity, illuminance, and luminance, drawn from the simulation and design of lighting optical systems, secondary optics for LEDs, stray light analysis, virtual prototyping, and photorealistic rendering.
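
    As an example of the tone mapping requirement, a global Reinhard-style operator can be sketched in a few lines (an illustrative operator, not the one implemented in LumiVue):

```python
import numpy as np

def reinhard_tonemap(luminance, key=0.18):
    """Global Reinhard operator: compress HDR luminance into [0, 1)."""
    lum = np.asarray(luminance, dtype=float)
    # Log-average ("key") luminance of the scene.
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)

# Simulated luminance values spanning a 10^5:1 dynamic range, as a
# virtual sensor behind an illumination optic might record.
hdr = np.geomspace(1e-2, 1e3, 256)
ldr = reinhard_tonemap(hdr)
```

    The operator is monotone, so relative brightness ordering in the simulated distribution is preserved while the dynamic range is compressed for display.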

  7. Analysis of image quality parameter of conventional and dental radiographic digital images.

    PubMed

    Mayo, P; Ródenas, F; Verdu, G; Campayo, J M; Gallardo, S

    2010-01-01

    The image quality obtained by radiographic equipment is very useful for characterizing the physical properties of the radiographic image chain in a quality control programme. In radiography, image evaluation must guarantee consistent quality so that a suitable diagnosis can be carried out. In this work we have designed radiographic phantoms for different digital radiographic devices: dental and conventional equipment using computed radiography (phosphor plate) and direct radiography (sensor) technology. Additionally, we have developed software to analyse the images obtained by the radiographic equipment with digital processing techniques such as edge detectors, morphological operators, and statistical tests for the detected combinations. The design of these phantoms allows evaluation over a wide range of operating conditions of voltage, current, and time of the digital equipment. Moreover, automatic analysis of image quality by the software allows it to be studied with objective parameters.

  8. Acne image analysis: lesion localization and classification

    NASA Astrophysics Data System (ADS)

    Abas, Fazly Salleh; Kaffenberger, Benjamin; Bikowski, Joseph; Gurcan, Metin N.

    2016-03-01

    Acne is a common skin condition present predominantly in the adolescent population, but may continue into adulthood. Scarring occurs commonly as a sequel to severe inflammatory acne. The presence of acne and resultant scars are more than cosmetic, with a significant potential to alter quality of life and even job prospects. The psychosocial effects of acne and scars can be disturbing and may be a risk factor for serious psychological concerns. Treatment efficacy is generally determined based on an invalidated gestalt by the physician and patient. However, the validated assessment of acne can be challenging and time consuming. Acne can be classified into several morphologies including closed comedones (whiteheads), open comedones (blackheads), papules, pustules, cysts (nodules) and scars. For a validated assessment, the different morphologies need to be counted independently, a method that is far too time consuming considering the limited time available for a consultation. However, it is practical to record and analyze images since dermatologists can validate the severity of acne within seconds after uploading an image. This paper covers the processes of region-of-interest determination using entropy-based filtering and thresholding, as well as acne lesion feature extraction. Feature extraction methods using discrete wavelet frames and the gray-level co-occurrence matrix were presented and their effectiveness in separating the six major acne lesion classes was discussed. Several classifiers were used to test the extracted features. Correct classification accuracy as high as 85.5% was achieved using a binary classification tree with fourteen principal components used as descriptors. Further studies are underway to improve the algorithm's performance and validate it on a larger database.

  9. Subcellular chemical and morphological analysis by stimulated Raman scattering microscopy and image analysis techniques

    PubMed Central

    D’Arco, Annalisa; Brancati, Nadia; Ferrara, Maria Antonietta; Indolfi, Maurizio; Frucci, Maria; Sirleto, Luigi

    2016-01-01

    The visualization of heterogeneous morphology, and the segmentation and quantification of image features, are crucial for nonlinear optical microscopy applications, spanning from imaging of living cells or tissues to biomedical diagnostics. In this paper, a methodology combining stimulated Raman scattering microscopy and image analysis techniques is presented. The basic idea is to join the potential of the vibrational contrast of stimulated Raman scattering with the strength of image analysis techniques in order to delineate subcellular morphology with chemical specificity. Validation tests on label-free imaging of polystyrene beads and of adipocyte cells are reported and discussed. PMID:27231626

  10. Comparative analysis of NDE techniques with image processing

    NASA Astrophysics Data System (ADS)

    Rathod, Vijay R.; Anand, R. S.; Ashok, Alaknanda

    2012-12-01

    The paper reports comparative results of nondestructive testing (NDT) experiments on deliberately created flaws in castings at the Central Foundry Forge Plant (CFFP) of Bharat Heavy Electrical Ltd. India (BHEL). The present experimental study compares the evaluation of image processing methods applied to radiographic images of welding defects, such as slag inclusion, porosity, lack of root penetration, and cracks, with other NDT methods. Different image segmentation techniques are proposed here for identifying the above welding defects. Currently, a large amount of research is underway in the field of automated systems for the inspection, analysis, and detection of flaws in weldments. The comparison of NDT methods and the application of image processing to radiographic images of weld defects aim to detect defects reliably and to make accept/reject decisions as per the international standard.
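
    A representative segmentation step for such radiographs is global thresholding. The sketch below implements Otsu's method from the gray-level histogram (one of many possible segmentation techniques, not necessarily the one used in the study):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: gray level maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(bins))       # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return edges[np.argmax(sigma_b)]

rng = np.random.default_rng(4)
# Bimodal synthetic "radiograph": dark weld background with a bright
# defect blob (a hypothetical stand-in for slag inclusion or porosity).
img = rng.normal(60, 10, (128, 128))
img[40:60, 40:60] = rng.normal(200, 10, (20, 20))
img = img.clip(0, 255)
t = otsu_threshold(img)
```

    Pixels above `t` form the candidate defect region, which would then feed the accept/reject decision stage.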

  11. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map, and an integrated microprocessor clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
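
    Chain coding of clustered edges, as performed here by the microprocessor, can be illustrated with the classic Freeman 8-direction scheme (an illustrative reimplementation, not the original software):

```python
# Freeman 8-direction codes: 0 = +x, then counter-clockwise in 45° steps.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode an ordered pixel path as Freeman 8-direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# A 2x2 square outline traced counter-clockwise from the origin.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
          (1, 2), (0, 2), (0, 1), (0, 0)]
```

    The compact code sequence is what makes chain codes attractive for a memory-limited microprocessor: the outline is stored as 3-bit symbols rather than coordinate pairs.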

  12. Classification of Korla fragrant pears using NIR hyperspectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Yang, Chun-Chieh; Ying, Yibin; Kim, Moon S.; Chao, Kuanglin

    2012-05-01

    Korla fragrant pears are small oval pears characterized by light green skin, crisp texture, and a pleasant perfume for which they are named. Anatomically, the calyx of a fragrant pear may be either persistent or deciduous; the deciduous-calyx fruits are considered more desirable due to taste and texture attributes. Chinese packaging standards require that packed cases of fragrant pears contain 5% or less of the persistent-calyx type. Near-infrared hyperspectral imaging was investigated as a potential means for automated sorting of pears according to calyx type. Hyperspectral images spanning the 992-1681 nm region were acquired using an EMCCD-based laboratory line-scan imaging system. Analysis of the hyperspectral images was performed to select wavebands useful for identifying persistent-calyx fruits and for identifying deciduous-calyx fruits. Based on the selected wavebands, an image-processing algorithm was developed that targets automated classification of Korla fragrant pears into the two categories for packaging purposes.

  13. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small, and hidden targets in synthetic aperture radar (SAR) images is still an open issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) theory can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which helps in detecting and recognizing faint targets. Therefore, this paper proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.

  14. Image analysis of ocular fundus for retinopathy characterization

    SciTech Connect

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, covering both non-emergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of images of non-macula-centric views of the eye fundus with nonuniform illumination from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high level of true-positive microaneurysms.

  15. Quantifying fungal infection of plant leaves by digital image analysis using Scion Image software.

    PubMed

    Wijekoon, C P; Goodwin, P H; Hsiang, T

    2008-08-01

    A digital image analysis method previously used to evaluate leaf color changes due to nutritional changes was modified to measure the severity of several foliar fungal diseases. Images captured with a flatbed scanner or digital camera were analyzed with a freely available software package, Scion Image, to measure changes in leaf color caused by fungal sporulation or tissue damage. High correlations were observed between the percent diseased leaf area estimated by Scion Image analysis and the percent diseased leaf area from leaf drawings. These drawings of various foliar diseases came from a disease key previously developed to aid in visual estimation of disease severity. For leaves of Nicotiana benthamiana inoculated with different spore concentrations of the anthracnose fungus Colletotrichum destructivum, a high correlation was found between the percent diseased tissue measured by Scion Image analysis and the number of leaf spots. The method was adapted to quantify percent diseased leaf area ranging from 0 to 90% for anthracnose of lily-of-the-valley, apple scab, powdery mildew of phlox and rust of golden rod. In some cases, the brightness and contrast of the images were adjusted and other modifications were made, but these were standardized for each disease. Detached leaves were used with the flatbed scanner, but a method using attached leaves with a digital camera was also developed to make serial measurements of individual leaves to quantify symptom progression. This was successfully applied to monitor anthracnose on N. benthamiana leaves. Digital image analysis using Scion Image software is a useful tool for quantifying a wide variety of fungal interactions with plant leaves.
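
    The color-threshold measurement of diseased area can be sketched without Scion Image. The rule below (a leaf pixel counts as "diseased" when its red channel matches or exceeds green) is a hypothetical simplification for illustration, not the calibrated procedure of the study:

```python
import numpy as np

def percent_diseased(rgb, leaf_mask):
    """Percent of leaf pixels whose green channel no longer dominates."""
    r, g = rgb[..., 0], rgb[..., 1]
    diseased = leaf_mask & (g <= r)      # brown/necrotic: red >= green
    return 100.0 * diseased.sum() / leaf_mask.sum()

# Synthetic leaf image: healthy green tissue with one brown lesion.
rgb = np.zeros((100, 100, 3), dtype=np.uint8)
leaf = np.ones((100, 100), dtype=bool)
rgb[..., 1] = 180                        # green leaf tissue
rgb[20:40, 20:40] = (139, 69, 19)        # brown lesion: 400 of 10,000 pixels
```

    Repeating the measurement on serial images of the same attached leaf, as the authors do with a digital camera, turns this single number into a symptom-progression curve.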

  16. Issues in Quantitative Analysis of Ultraviolet Imager (UVI) Data: Airglow

    NASA Technical Reports Server (NTRS)

    Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.

    1999-01-01

    The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location, spatial, and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in quantitative analysis of UVI images. These issues include instrument calibration, long-term changes in sensitivity, and imager flat-field response, as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.

  17. A hyperspectral image analysis workbench for environmental science applications

    SciTech Connect

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.
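
    A box-counting estimate of fractal dimension is one common way to derive the kind of per-tile fractal signature described above. A sketch under assumed choices (scale set, cropping rule), not Argonne's implementation:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8)):
    # count boxes of side s containing at least one "on" pixel, then
    # fit log(count) against log(s); the negated slope estimates the
    # box-counting (fractal) dimension of the pattern in the tile
    n = binary.shape[0]
    counts = []
    for s in sizes:
        m = n - n % s                      # crop to a multiple of s
        view = binary[:m, :m].reshape(m // s, s, m // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

Computing this per subimage tile at each wavelength yields a "texture spectrum" that can feed a neural network classifier alongside the raw pixel spectra.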

  19. Quantitative analysis of synchrotron radiation intravenous angiographic images

    NASA Astrophysics Data System (ADS)

    Sarnelli, Anna; Nemoz, Christian; Elleaume, Hélène; Estève, François; Bertrand, Bernard; Bravin, Alberto

    2005-02-01

    A medical research protocol on clinical intravenous coronary angiography has been completed at the European Synchrotron Radiation Facility (ESRF) biomedical beamline. The aim was to investigate the accuracy of intravenous coronary angiography based on the K-edge digital subtraction technique for the detection of in-stent restenosis. For each patient, diagnosis has been performed on the synchrotron radiation images and monitored with the conventional selective coronary angiography method taken as the gold standard. In this paper, the methods of image processing and the results of the quantitative analysis are described. Image processing includes beam harmonic contamination correction, spatial deconvolution and the extraction of a 'contrast' and a 'tissue' image from each pair of radiograms simultaneously acquired at energies bracketing the K-edge of iodine. Quantitative analysis includes the estimation of the vessel diameter, the calculation of the absolute iodine concentration profiles along the coronary arteries and the stenosis degree measurement.
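
    K-edge digital subtraction treats each pixel's log-attenuation at the two energies as a 2x2 linear system in the iodine and tissue quantities. A sketch with assumed illustrative attenuation coefficients, not the calibrated beamline values:

```python
import numpy as np

# illustrative mass attenuation coefficients (cm^2/g) just below and
# just above the 33.17 keV iodine K-edge; assumed values for this
# sketch only
MU = np.array([[ 6.0, 0.35],   # E below K-edge: [iodine, tissue]
               [30.0, 0.33]])  # E above K-edge: [iodine, tissue]

def kedge_decompose(neg_log_lo, neg_log_hi, mu=MU):
    # per pixel: -ln(I/I0) = mu_iodine * a_iodine + mu_tissue * a_tissue
    # at both energies; solving the 2x2 system yields the 'contrast'
    # (iodine) and 'tissue' images
    rhs = np.stack([np.ravel(neg_log_lo), np.ravel(neg_log_hi)])
    sol = np.linalg.solve(mu, rhs)
    shape = np.shape(neg_log_lo)
    return sol[0].reshape(shape), sol[1].reshape(shape)
```

The large jump in the iodine coefficient across the K-edge is what makes the system well conditioned; the tissue coefficient barely changes between the two energies.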

  1. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.

  2. Flexibility analysis in adolescent idiopathic scoliosis on side-bending images using the EOS imaging system.

    PubMed

    Hirsch, C; Ilharreborde, B; Mazda, K

    2016-06-01

    Analysis of preoperative flexibility in adolescent idiopathic scoliosis (AIS) is essential to classify the curves, determine their structurality, and select the fusion levels during preoperative planning. Side-bending x-rays are the gold standard for the analysis of preoperative flexibility. The objective of this study was to examine the feasibility and performance of side-bending images taken in the standing position using the EOS imaging system. All patients who underwent preoperative assessment between April 2012 and January 2013 for AIS were prospectively included in the study. The work-up included standing AP and lateral EOS x-rays of the spine, standard side-bending x-rays in the supine position, and standing bending x-rays in the EOS booth. The irradiation dose was measured for each of the tests. Two-dimensional reducibility of the Cobb angle was measured on both types of bending x-rays. The results were based on the 50 patients in the study. No significant difference was demonstrated for reducibility of the Cobb angle between the standing side-bending images with the EOS imaging system and those in the supine position for any type of Lenke deformity. The irradiation dose was five times lower during the EOS bending imaging. The standing side-bending images in the EOS device yielded the same results as the supine images, with five times less irradiation. They should therefore be used in clinical routine. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
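
    The 2-D reducibility compared in the study can be expressed as the usual percentage reduction of the Cobb angle from the standing to the bending view; a one-line sketch (the function name is illustrative, not from the paper):

```python
def cobb_reducibility(standing_cobb_deg, bending_cobb_deg):
    # percentage reduction of the Cobb angle from the standing view
    # to the side-bending view
    return 100.0 * (standing_cobb_deg - bending_cobb_deg) / standing_cobb_deg
```

A curve measuring 60 degrees standing and 30 degrees on bending is thus 50% reducible, one common criterion for judging a curve non-structural.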

  3. Rapid enumeration of viable bacteria by image analysis

    NASA Technical Reports Server (NTRS)

    Singh, A.; Pyle, B. H.; McFeters, G. A.

    1989-01-01

    A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
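
    The counting step can be sketched as connected-component labeling with a minimum-area criterion standing in for the "enlarged after incubation" rule; the 4-connectivity and area threshold here are illustrative assumptions, not the authors' calibrated values:

```python
import numpy as np
from collections import deque

def count_viable(binary, min_area=5):
    # label 4-connected components by flood fill; only components of at
    # least `min_area` pixels (cells that enlarged during incubation
    # with nalidixic acid) are counted as viable
    h, w = binary.shape
    visited = np.zeros((h, w), dtype=bool)
    viable = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                area = 0
                queue = deque([(i, j)])
                visited[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    viable += 1
    return viable
```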

  5. Peripheral blood smear image analysis: A comprehensive review.

    PubMed

    Mohammed, Emad A; Mohamed, Mostafa M A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVMs) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class, and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs based on the training data, without assigning an explicit class to that object. ANN, SVM, decision tree and K-nearest neighbor are possible approaches to classification algorithms. Increased discrimination may be obtained by combining several classifiers together.
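
    As an example of the supervised classification step described above, a minimal K-nearest-neighbor classifier over extracted feature vectors might look like this (a generic sketch, not any specific published pipeline; the labels are illustrative):

```python
import numpy as np

def knn_classify(train_features, train_labels, x, k=3):
    # supervised classification: return the majority label among the k
    # training feature vectors nearest (Euclidean) to the test vector x
    dists = np.linalg.norm(train_features - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

In a smear pipeline, `train_features` would hold shape, color, and texture descriptors extracted from segmented cells.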

  6. A Software Package For Biomedical Image Processing And Analysis

    NASA Astrophysics Data System (ADS)

    Goncalves, Joao G. M.; Mealha, Oscar

    1988-06-01

    The decreasing cost of computing power and the introduction of low-cost imaging boards justify the increasing number of applications of digital image processing techniques in the area of biomedicine. There is however a large software gap to be filled between the application and the equipment. The requirements to bridge this gap are twofold: good knowledge of the hardware provided and its interface to the host computer, and expertise in digital image processing and analysis techniques. A software package incorporating these two requirements was developed using the C programming language, in order to create a user-friendly image processing programming environment. The software package can be considered in two different ways: as a data structure adapted to image processing and analysis, which acts as the backbone and the standard of communication for all the software; and as a set of routines implementing the basic algorithms used in image processing and analysis. Hardware dependency is restricted to a single module upon which all hardware calls are based. The data structure that was built has four main features: it is hierarchical, open, object oriented, and has object-dependent dimensions. Considering the vast amount of memory needed by imaging applications and the memory available in small imaging systems, an effective image memory management scheme was implemented. This software package has been in use for more than one and a half years by users with different applications. It proved to be an efficient tool for helping users adapt to the system, and for standardizing and exchanging software, while preserving the flexibility to allow for users' specific implementations. The philosophy of the software package is discussed and the data structure that was built is described in detail.

  7. Uncontact Certification Using Video Hand Image by Morphology Analysis

    NASA Astrophysics Data System (ADS)

    Moritani, Motoki; Saitoh, Fumihiko

    This paper proposes a non-contact certification system for security access control based on morphological analysis of consecutive hand images. A non-contact hand-image certification system is more acceptable than a contact-based one, where psychological resistance and conformability become issues. Morphology is applied to obtain useful individual characteristics even when the pose of the hand changes. The experimental results show that higher certification accuracy was obtained using consecutive frames than with the conventional method.

  8. Topographic slope correction for analysis of thermal infrared images

    NASA Technical Reports Server (NTRS)

    Watson, K. (Principal Investigator)

    1982-01-01

    A simple topographic slope correction using a linearized thermal model and assuming slopes of less than about 20 degrees is presented. The correction can be used to analyze individual thermal images or composite products such as temperature difference or thermal inertia. Simple curves are provided for latitudes of 30 and 50 degrees. The form is easily adapted for analysis of HCMM images using the DMA digital terrain data.

  9. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  10. A comparative image analysis of radial Fourier-Chebyshev moments

    NASA Astrophysics Data System (ADS)

    Li, Bo

    2017-08-01

    On the basis of the discrete Fourier functions and the discrete Chebyshev polynomials, a new set of radial orthogonal moment functions is presented. The new moments construct a new discrete orthogonal plane and adopt a new sampling method that overcomes the shortcomings of the classical method, so they can be used effectively in image analysis. The experimental results show that the new radial moments are superior to conventional moments in image reconstruction and computational efficiency.

  11. Image sequence analysis and face feature extraction

    NASA Astrophysics Data System (ADS)

    Ravaut, Frederic; Stamon, Georges

    1997-04-01

    Based on the hypothesis of a one-to-one relationship between the external symptoms of epileptic fits and the abnormal cerebral functioning that causes them, the computerized study of video recordings of epileptic fits brings new information on abnormal neuron activity. This insight will improve specialists' analysis in their diagnoses.

  12. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...

  14. Semi-supervised Cluster Analysis of Imaging Data

    PubMed Central

    Filipovych, Roman; Resnick, Susan M.; Davatzikos, Christos

    2010-01-01

    In this paper, we present a semi-supervised clustering-based framework for discovering coherent subpopulations in heterogeneous image sets. Our approach involves limited supervision in the form of labeled instances from two distributions that reflect a rough guess about subspace of features that are relevant for cluster analysis. By assuming that images are defined in a common space via registration to a common template, we propose a segmentation-based method for detecting locations that signify local regional differences in the two labeled sets. A PCA model of local image appearance is then estimated at each location of interest, and ranked with respect to its relevance for clustering. We develop an incremental k-means-like algorithm that discovers novel meaningful categories in a test image set. The application of our approach in this paper is in analysis of populations of healthy older adults. We validate our approach on a synthetic dataset, as well as on a dataset of brain images of older adults. We assess our method’s performance on the problem of discovering clusters of MR images of human brain, and present a cluster-based measure of pathology that reflects the deviation of a subject’s MR image from normal (i.e. cognitively stable) state. We analyze the clusters’ structure, and show that clustering results obtained using our approach correlate well with clinical data. PMID:20933091
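
    The incremental k-means-like idea, spawning a new cluster whenever a sample lies far from all existing centroids, can be sketched as follows (the distance threshold `tau` and the running-mean update are illustrative choices, not the paper's exact algorithm):

```python
import numpy as np

def incremental_kmeans(samples, tau):
    # assign each sample to its nearest centroid unless it is farther
    # than tau from all of them, in which case it seeds a new cluster
    # (a novel category); each centroid tracks the running mean of its
    # assigned samples
    centroids, counts, labels = [], [], []
    for x in samples:
        if centroids:
            dists = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(dists))
        if not centroids or dists[j] > tau:
            centroids.append(np.asarray(x, dtype=float).copy())
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[j] += 1
            centroids[j] += (x - centroids[j]) / counts[j]
            labels.append(j)
    return np.array(centroids), labels
```

In the paper's setting, the samples would be ranked local-appearance (PCA) features rather than raw coordinates.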

  15. Texture analysis of high-resolution FLAIR images for TLE

    NASA Astrophysics Data System (ADS)

    Jafari-Khouzani, Kourosh; Soltanian-Zadeh, Hamid; Elisevich, Kost

    2005-04-01

    This paper presents a study of the texture information of high-resolution FLAIR images of the brain with the aim of determining the abnormality and consequently the candidacy of the hippocampus for temporal lobe epilepsy (TLE) surgery. Intensity and volume features of the hippocampus from FLAIR images of the brain have been previously shown to be useful in detecting the abnormal hippocampus in TLE. However, the small size of the hippocampus may limit the texture information. High-resolution FLAIR images show more details of the abnormal intensity variations of the hippocampi and therefore are more suitable for texture analysis. We study and compare the low and high-resolution FLAIR images of six epileptic patients. The hippocampi are segmented manually by an expert from T1-weighted MR images. Then the segmented regions are mapped on the corresponding FLAIR images for texture analysis. The 2-D wavelet transforms of the hippocampi are employed for feature extraction. We compare the ability of the texture features from regular and high-resolution FLAIR images to distinguish normal and abnormal hippocampi. Intracranial EEG results as well as surgery outcome are used as gold standard. The results show that the intensity variations of the hippocampus are related to the abnormalities in the TLE.
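
    The 2-D wavelet feature extraction can be illustrated with a single-level Haar transform and per-subband energies; this is a simplified stand-in for whichever wavelet family the authors used:

```python
import numpy as np

def haar2d(img):
    # one-level 2-D Haar transform: averages/differences along columns,
    # then along rows, giving an approximation (LL) and three detail
    # subbands (LH, HL, HH)
    a = (img[:, ::2] + img[:, 1::2]) / 2.0
    d = (img[:, ::2] - img[:, 1::2]) / 2.0
    LL = (a[::2] + a[1::2]) / 2.0
    LH = (a[::2] - a[1::2]) / 2.0
    HL = (d[::2] + d[1::2]) / 2.0
    HH = (d[::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def texture_features(img):
    # mean energy of each subband as a small texture descriptor for the
    # segmented hippocampal region
    return [float(np.mean(band ** 2)) for band in haar2d(img)]
```

Higher detail-band energies in the high-resolution FLAIR images would reflect the stronger intensity variations of an abnormal hippocampus.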

  16. Evaluation of stereoscopic 3D displays for image analysis tasks

    NASA Astrophysics Data System (ADS)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  17. Measurements and analysis of active/passive multispectral imaging

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Hamoir, Dominique; Steinvall, Ove; Larsson, Håkan; Amselem, Elias; Lutzmann, Peter; Repasi, Endre; Göhler, Benjamin; Barbé, Stéphane; Vaudelin, Olivier; Fracès, Michel; Tanguy, Bernard; Thouin, Emmanuelle

    2013-10-01

    This paper describes a data collection on passive and active imaging and the preliminary analysis. It is part of an ongoing work on active and passive imaging for target identification using different wavelength bands. We focus on data collection at NIR-SWIR wavelengths but we also include the visible and the thermal region. Active imaging in NIR-SWIR will support the passive imaging by eliminating shadows during day-time and allow night operation. Among the applications that are most likely for active multispectral imaging, we focus on long range human target identification. We also study the combination of active and passive sensing. The target scenarios of interest include persons carrying different objects and their associated activities. We investigated laser imaging for target detection and classification up to 1 km assuming that another cueing sensor - passive EO and/or radar - is available for target acquisition and detection. Broadband or multispectral operation will reduce the effects of target speckle and atmospheric turbulence. Longer wavelengths will improve performance in low visibility conditions due to haze, clouds and fog. We are currently performing indoor and outdoor tests to further investigate the target/background phenomena that are emphasized in these wavelengths. We also investigate how these effects can be used for target identification and image fusion. Performed field tests and the results of preliminary data analysis are reported.

  18. Quantitative analysis of single-molecule superresolution images

    PubMed Central

    Coltharp, Carla; Yang, Xinxing; Xiao, Jie

    2014-01-01

    This review highlights the quantitative capabilities of single-molecule localization-based superresolution imaging methods. In addition to revealing fine structural details, the molecule coordinate lists generated by these methods provide the critical ability to quantify the number, clustering, and colocalization of molecules with 10-50 nm resolution. Here we describe typical workflows and precautions for quantitative analysis of single-molecule superresolution images. These guidelines include potential pitfalls and essential control experiments, allowing critical assessment and interpretation of superresolution images. PMID:25179006
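
    One simple coordinate-list quantification of the kind reviewed above is a colocalization fraction: the share of channel-A localizations with at least one channel-B localization within a search radius. A sketch (the brute-force distance scan and the radius value are illustrative assumptions):

```python
import numpy as np

def colocalized_fraction(coords_a, coords_b, radius):
    # fraction of channel-A localizations having at least one
    # channel-B localization within `radius` (same units as coords,
    # e.g. nanometers)
    hits = 0
    for p in coords_a:
        dists = np.linalg.norm(coords_b - p, axis=1)
        if (dists <= radius).any():
            hits += 1
    return hits / len(coords_a)
```

The review's cautions apply directly here: the chosen radius should reflect the localization precision, and randomized controls are needed to rule out chance proximity.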

  19. Technical considerations for functional magnetic resonance imaging analysis.

    PubMed

    Conklin, Chris J; Faro, Scott H; Mohamed, Feroze B

    2014-11-01

    Clinical application of functional magnetic resonance imaging (fMRI) based on blood oxygenation level-dependent (BOLD) effect has increased over the past decade because of its ability to map regional blood flow in response to brain stimulation. This mapping is primarily achieved by exploiting the BOLD effect precipitated by changes in the magnetic properties of hemoglobin. BOLD fMRI has utility in neurosurgical planning and mapping neuronal functional connectivity. Conventional echo planar imaging techniques are used to acquire stimulus-driven fMR imaging BOLD data. This article highlights technical aspects of fMRI data analysis to make it more accessible in clinical settings.

  20. Theoretical analysis of quantum ghost imaging through turbulence

    SciTech Connect

    Chan, Kam Wai Clifford; Simon, D. S.; Sergienko, A. V.; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Dixon, P. Ben; Howland, Gregory A.; Howell, John C.; Eberly, Joseph H.; O'Sullivan, Malcolm N.; Rodenburg, Brandon; Boyd, Robert W.

    2011-10-15

    Atmospheric turbulence generally affects the resolution and visibility of an image in long-distance imaging. In a recent quantum ghost imaging experiment [P. B. Dixon et al., Phys. Rev. A 83, 051803 (2011)], it was found that the effect of the turbulence can nevertheless be mitigated under certain conditions. This paper gives a detailed theoretical analysis to the setup and results reported in the experiment. Entangled photons with a finite correlation area and a turbulence model beyond the phase screen approximation are considered.

  1. Multiphoton autofluorescence spectral analysis for fungus imaging and identification

    NASA Astrophysics Data System (ADS)

    Lin, Sung-Jan; Tan, Hsin-Yuan; Kuo, Chien-Jui; Wu, Ruei-Jr; Wang, Shiou-Han; Chen, Wei-Liang; Jee, Shiou-Hwa; Dong, Chen-Yuan

    2009-07-01

    We performed multiphoton imaging on fungi of medical significance. Fungal hyphae and spores of Aspergillus flavus, Microsporum gypseum, Microsporum canis, Trichophyton rubrum, and Trichophyton tonsurans were found to be strongly autofluorescent but to generate a less prominent second harmonic signal. The cell wall and septum of fungal hyphae can be easily identified by autofluorescence imaging. We found that fungi of various species have distinct autofluorescence characteristics. Our results show that the combination of multiphoton imaging and spectral analysis can be used to visualize and identify fungal species. This approach may be developed into an effective diagnostic tool for fungal identification.

  2. Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop

    NASA Technical Reports Server (NTRS)

    Vane, G. (Editor); Goetz, A. F. H. (Editor)

    1985-01-01

    The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.

  3. Automatic quantitative analysis of cardiac MR perfusion images

    NASA Astrophysics Data System (ADS)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
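
    The per-pixel analysis of first-pass time-intensity profiles described above can be sketched as follows. The chosen parameters (peak enhancement, time to peak, maximum upslope) are common perfusion measures, and the simple finite-difference upslope estimate is an illustrative assumption, not the authors' exact implementation.

```python
import numpy as np

def perfusion_parameters(intensity, times):
    """First-pass perfusion parameters from one time-intensity profile
    (illustrative parameter choices, not the paper's exact set)."""
    intensity = np.asarray(intensity, dtype=float)
    times = np.asarray(times, dtype=float)
    enhancement = intensity - intensity[0]          # baseline-corrected signal
    peak_idx = int(np.argmax(enhancement))
    slopes = np.diff(enhancement) / np.diff(times)  # finite-difference slopes
    max_upslope = float(slopes[:peak_idx].max()) if peak_idx > 0 else 0.0
    return {"peak": float(enhancement[peak_idx]),
            "time_to_peak": float(times[peak_idx] - times[0]),
            "max_upslope": max_upslope}

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
curve = np.array([10.0, 12.0, 20.0, 30.0, 28.0, 26.0])
params = perfusion_parameters(curve, t)
print(params)  # peak 20.0, time_to_peak 3.0, max_upslope 10.0
```

    In the paper these parameters are computed for every myocardial position during the first pass of the contrast agent and then visualized as color overlays.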

  4. SIMA: Python software for analysis of dynamic fluorescence imaging data.

    PubMed

    Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
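
    The motion-correction step can be illustrated with a generic phase-correlation registration sketch. Note that SIMA itself uses a hidden-Markov-model-based approach; this shows the general technique, not SIMA's API.

```python
import numpy as np

def phase_correlation_shift(frame, reference):
    """Integer (dy, dx) translation such that np.roll(frame, (dy, dx),
    axis=(0, 1)) realigns `frame` with `reference` (phase correlation)."""
    F = np.fft.fft2(reference) * np.conj(np.fft.fft2(frame))
    # Normalizing by the magnitude leaves only phase; its inverse FFT
    # has a sharp peak at the displacement between the two frames.
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = frame.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

ref = np.random.default_rng(1).random((32, 32))
frame = np.roll(ref, (5, -3), axis=(0, 1))
print(phase_correlation_shift(frame, ref))  # (-5, 3)
```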

  5. Image-based histologic grade estimation using stochastic geometry analysis

    NASA Astrophysics Data System (ADS)

    Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.

    2011-03-01

    Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma, no special type cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures their distinguishing properties between various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.
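
    For one-dimensional shape distributions such as those described above, the Earth Mover's Distance reduces to a cumulative-histogram difference. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def emd_1d(hist_a, hist_b):
    """Earth Mover's Distance between two 1-D normalized histograms.
    For 1-D distributions it equals the summed absolute difference
    of the cumulative distributions."""
    return float(np.abs(np.cumsum(hist_a) - np.cumsum(hist_b)).sum())

# All mass in the first bin vs. all mass in the third bin:
# every unit of mass travels two bins, so the distance is 2.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.0, 1.0])
print(emd_1d(a, b))  # 2.0
```

    In the paper, EMD values between shape distributions feed a K-Nearest Neighbors classifier against the a priori known histological grades.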

  6. Swarm optimization methods for cognitive image analysis

    NASA Astrophysics Data System (ADS)

    Owechko, Yuri; Medasani, Swarup

    2007-09-01

    We describe cognitive swarms, a new method for efficient visual recognition of objects in an image or video sequence that combines feature-based object classification with search mechanisms based on swarm intelligence. Our approach utilizes the particle swarm optimization (PSO) algorithm, a population-based evolutionary algorithm that is effective for optimization of a wide range of functions. PSO searches a multi-dimensional solution space for a global optimum using a population or swarm of "particles" that cooperate using a low-overhead communication scheme to search the solution space efficiently. We use a system of local and global swarms to detect and track multiple objects in video sequences. In our implementation, each particle in the swarm consists of a cascade of classifiers that utilize wavelet and edge-symmetry features to recognize objects. PSO update equations are used to control the movement of the swarm in solution space as the particles cooperate to find objects efficiently by maximizing classification confidence. By performing this optimization, the classifier swarm finds objects in the scene, determines their size, and optimizes other classifier parameters such as the object rotation angle. Map-based attention feedback is used to further increase the efficiency of cognitive swarms. Performance results are presented for human and vehicle detection.
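
    The canonical PSO update equations referred to above can be sketched as follows. The inertia and acceleration constants are typical textbook values, and the toy objective stands in for the classifier-confidence function being maximized.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO; maximizes `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmax(pbest_val)].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Canonical update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmax(pbest_val)].copy()
    return g, float(pbest_val.max())

# Toy "classification confidence" peaked at (3, 4) in a 2-D search space.
best, val = pso(lambda p: -((p[0] - 3.0) ** 2 + (p[1] - 4.0) ** 2),
                (np.array([0.0, 0.0]), np.array([10.0, 10.0])))
print(best)  # close to [3, 4]
```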

  7. Nanobiodevices for Biomolecule Analysis and Imaging

    NASA Astrophysics Data System (ADS)

    Yasui, Takao; Kaji, Noritada; Baba, Yoshinobu

    2013-06-01

    Nanobiodevices have been developed to analyze biomolecules and cells for biomedical applications. In this review, we discuss several nanobiodevices used for disease-diagnostic devices, molecular imaging devices, regenerative medicine, and drug-delivery systems and describe the numerous advantages of nanobiodevices, especially in biological, medical, and clinical applications. This review also outlines the fabrication technologies for nanostructures and nanomaterials, including top-down nanofabrication and bottom-up molecular self-assembly approaches. We describe nanopillar arrays and nanowall arrays for the ultrafast separation of DNA or protein molecules and nanoball materials for the fast separation of a wide range of DNA molecules, and we present examples of applications of functionalized carbon nanotubes to obtain information about subcellular localization on the basis of mobility differences between free fluorophores and fluorophore-labeled carbon nanotubes. Finally, we discuss applications of newly synthesized quantum dots to the screening of small interfering RNA, highly sensitive detection of disease-related proteins, and development of cancer therapeutics and diagnostics.

  8. Fractal-based image texture analysis of trabecular bone architecture.

    PubMed

    Jiang, C; Pitt, R E; Bertram, J E; Aneshansley, D J

    1999-07-01

    Fractal-based image analysis methods are investigated to extract textural features related to the anisotropic structure of trabecular bone from X-ray images of cubic bone specimens. Three methods are used to quantify image textural features: power spectrum, Minkowski dimension and mean intercept length. The global fractal dimension is used to describe the overall roughness of the image texture. The anisotropic features formed by the trabeculae are characterised by a fabric ellipse, whose orientation and eccentricity reflect the textural anisotropy of the image. Tests of these methods with synthetic images of known fractal dimension show that the Minkowski dimension provides a more accurate and consistent estimation of global fractal dimension. Tests on bone X-ray images (eccentricity range 0.25-0.80) indicate that the Minkowski dimension is more sensitive to changes in textural orientation. The results suggest that the Minkowski dimension is a better measure for characterising trabecular bone anisotropy in X-ray images of thick specimens.
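
    The box-counting idea underlying such fractal-dimension estimators can be sketched as follows. This is a generic estimator for illustration, not the paper's specific power-spectrum, Minkowski or mean-intercept-length implementations.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    the slope of log(box count) versus log(1 / box size)."""
    counts = []
    for s in sizes:
        h, w = img.shape
        # Count boxes of side s containing at least one foreground pixel.
        n = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if img[i:i + s, j:j + s].any():
                    n += 1
        counts.append(n)
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return float(slope)

# A completely filled square is 2-dimensional, so the estimate is 2.
filled = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(filled), 2))  # 2.0
```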

  9. Automated spine and vertebrae detection in CT images using object-based image analysis.

    PubMed

    Schwier, M; Chitiboi, T; Hülnhagen, T; Hahn, H K

    2013-09-01

    Although computer assistance has become common in medical practice, some of the most challenging tasks that remain unsolved are in the area of automatic detection and recognition. Human visual perception is in general far superior to computer vision algorithms. Object-based image analysis is a relatively new approach that aims to lift image analysis from pixel-based processing to semantic region-based processing of images. It allows effective integration of reasoning processes and contextual concepts into the recognition method. In this paper, we present an approach that applies object-based image analysis to the task of detecting the spine in computed tomography images. Automatic spine detection would be of great benefit in several contexts, from the automatic labeling of vertebrae to the assessment of spinal pathologies. We show with our approach how region-based features, contextual information and domain knowledge, especially concerning the typical shape and structure of the spine and its components, can be used effectively in the analysis process. The results of our approach are promising, with a detection rate for vertebral bodies of 96% and a precision of 99%. We also gain a good two-dimensional segmentation of the spine along the more central slices and a coarse three-dimensional segmentation.

  10. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  11. Functional imaging of auditory scene analysis.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging.

  12. Studying developmental variation with Geometric Morphometric Image Analysis (GMIA).

    PubMed

    Mayer, Christine; Metscher, Brian D; Müller, Gerd B; Mitteroecker, Philipp

    2014-01-01

    The ways in which embryo development can vary across individuals of a population determine how genetic variation translates into adult phenotypic variation. The study of developmental variation has been hampered by the lack of quantitative methods for the joint analysis of embryo shape and the spatial distribution of cellular activity within the developing embryo geometry. By drawing on the strengths of geometric morphometrics and pixel/voxel-based image analysis, we present a new approach for the biometric analysis of two-dimensional and three-dimensional embryonic images. Well-differentiated structures are described in terms of their shape, whereas structures with diffuse boundaries, such as emerging cell condensations or molecular gradients, are described as spatial patterns of intensities. We applied this approach to microscopic images of the tail fins of larval and juvenile rainbow trout. Inter-individual variation in shape and cell density was found to be highly spatially structured across the tail fin and temporally dynamic throughout the investigated period.

  13. Bayesian principal geodesic analysis for estimating intrinsic diffeomorphic image variability.

    PubMed

    Zhang, Miaomiao; Fletcher, P Thomas

    2015-10-01

    In this paper, we present a generative Bayesian approach for estimating the low-dimensional latent space of diffeomorphic shape variability in a population of images. We develop a latent variable model for principal geodesic analysis (PGA) that provides a probabilistic framework for factor analysis in the space of diffeomorphisms. A sparsity prior in the model results in automatic selection of the number of relevant dimensions by driving unnecessary principal geodesics to zero. To infer model parameters, including the image atlas, principal geodesic deformations, and the effective dimensionality, we introduce an expectation maximization (EM) algorithm. We evaluate our proposed model on 2D synthetic data and the 3D OASIS brain database of magnetic resonance images, and show that the automatically selected latent dimensions from our model are able to reconstruct unobserved testing images with lower error than both linear principal component analysis (LPCA) in the image space and tangent space principal component analysis (TPCA) in the diffeomorphism space. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focuses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
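
    The RANSAC-based congruence analysis, sequentially fitting a 3D similarity transformation to randomly selected point groups, can be sketched as follows. The least-squares similarity fit is Umeyama's method; the sample size, residual threshold and trial count are illustrative assumptions.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 3D similarity transform (Umeyama): returns scale s,
    rotation R and translation t with dst ~= s * src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflections
    D = np.array([1.0, 1.0, d])
    R = (U * D) @ Vt
    s = (S * D).sum() * len(src) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_congruent_points(src, dst, thresh=0.05, trials=300, seed=0):
    """Keep the largest point set consistently mapped by one similarity
    transform; those points are considered stable (congruent)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(trials):
        idx = rng.choice(len(src), size=4, replace=False)
        s, R, t = similarity_transform(src[idx], dst[idx])
        resid = np.linalg.norm(dst - (s * src @ R.T + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic epoch pair: a known similarity transform, two deformed points.
rng = np.random.default_rng(42)
src = rng.random((12, 3))
a = np.deg2rad(30)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
dst = 1.2 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
dst[:2] += 0.5  # deform the first two points
stable = ransac_congruent_points(src, dst)
print(stable.sum())  # 10 stable points
```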

  15. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. Because multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for diagnosis of cancer lesions and differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truth, regions of interest (ROIs) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional (29-dimensional) feature set, built with the extracted parameters from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating different tissue pathologies has been determined by the testing dataset. Using the NN

  16. ASTER Imaging and Analysis of Glacier Hazards

    NASA Astrophysics Data System (ADS)

    Kargel, Jeffrey; Furfaro, Roberto; Kaser, Georg; Leonard, Gregory; Fink, Wolfgang; Huggel, Christian; Kääb, Andreas; Raup, Bruce; Reynolds, John; Wolfe, David; Zapata, Marco

    Most scientific attention to glaciers, including ASTER and other satellite-derived applications in glacier science, pertains to their roles in the following seven functions: (1) as signposts of climate change (Kaser et al. 1990; Williams and Ferrigno 1999, 2002; Williams et al. 2008; Kargel et al. 2005; Oerlemans 2005), (2) as natural reservoirs of fresh water (Yamada and Motoyama 1988; Yang and Hu 1992; Shiyin et al. 2003; Juen et al. 2007), (3) as contributors to sea-level change (Arendt et al. 2002), (4) as sources of hydropower (Reynolds 1993); much work also relates to the basic science of glaciology, especially (5) the physical phenomenology of glacier flow processes and glacier change (DeAngelis and Skvarca 2003; Berthier et al. 2007; Rivera et al. 2007), (6) glacial geomorphology (Bishop et al. 1999, 2003), and (7) the technology required to acquire and analyze satellite images of glaciers (Bishop et al. 1999, 2000, 2003, 2004; Quincey et al. 2005, 2007; Raup et al. 2000, 2006a, b; Khalsa et al. 2004; Paul et al. 2004a, b). These seven functions define the important areas of glaciological science and technology, yet a more pressing issue in parts of the world is the direct danger to people and infrastructure posed by some glaciers (Trask 2005; Morales 1969; Lliboutry et al. 1977; Evans and Clague 1988; Xu and Feng 1989; Reynolds 1993, 1998, 1999; Yamada and Sharma 1993; Hastenrath and Ames 1995; Mool 1995; Ames 1998; Chikita et al. 1999; Williams and Ferrigno 1999; Richardson and Reynolds 2000a, b; Zapata 2002; Huggel et al. 2002, 2004; Xiangsong 1992; Kääb et al. 2003, 2005, 2005c; Salzmann et al. 2004; Noetzli et al. 2006).

  17. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that are validated for images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms.
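
    Of the listed indices, the global inhomogeneity (GI) index has a particularly simple form. The sketch below assumes the commonly used definition (summed absolute deviation of pixel tidal impedance values from their median, normalized by the total tidal impedance within the lung region):

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI index: median-normalized absolute deviation of pixel tidal
    impedance values within the identified lung region."""
    di = tidal_image[lung_mask]
    return float(np.abs(di - np.median(di)).sum() / di.sum())

# Perfectly homogeneous ventilation gives GI = 0.
img = np.full((4, 4), 5.0)
mask = np.ones((4, 4), dtype=bool)
print(global_inhomogeneity_index(img, mask))  # 0.0
```

    A larger GI value indicates a more inhomogeneous distribution of tidal ventilation across the lung region.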

  18. Poster - Thur Eve - 39: SBRT imaging analysis - patient results and QA of imaging systems.

    PubMed

    Mason, D; Neath, C

    2012-07-01

    Our centre began offering stereotactic body radiation therapy (SBRT) treatments for peripheral lung lesions in 2011. As a high-precision technique, SBRT requires precise positioning of the target, and precise quality assurance (QA) of the imaging systems; these may be dependent on local equipment and procedures. We aimed to maintain target position within 3 mm throughout each treatment, and imaging and mechanical systems to at least 2 mm accuracy. A retrospective analysis was done of patient cone-beam (CB) data, and of our imaging system QA, to assess our spatial objectives and look for opportunities for improvement. The data indicated that, using our immobilization and imaging procedures, target position was maintained within 3 mm 96% of the time, and within 2 mm 75% of the time, similar to results from other centres. Imaging system QA using the standard ball-bearing test showed system accuracy was maintained well within 1 mm. These results were compared with a simpler daily QA procedure using a Pentaguide phantom. The mean and standard deviation of the radial difference in the kV-MV isocenter coincidence for the two techniques was 0.62 mm ± 0.23 mm. With appropriate choice of tolerance and action level, the morning QA was sufficient for identifying outliers requiring further investigation. This analysis gives us confidence in understanding the performance of our SBRT lung treatments, and gives baselines for analyzing changes to patient immobilization or imaging procedures. © 2012 American Association of Physicists in Medicine.

  19. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Advances in sensor technology have produced large volumes of medical image data capable of visualizing anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select features. However, the accuracy of clustering with a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis. PMID:26196383

  20. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis.

    PubMed

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Advances in sensor technology have produced large volumes of medical image data capable of visualizing anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select features. However, the accuracy of clustering with a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both the liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms the current sparse clustering algorithms in image cluster analysis.

  1. Multispectral retinal image analysis: a novel non-invasive tool for retinal imaging

    PubMed Central

    Calcagni, A; Gibson, J M; Styles, I B; Claridge, E; Orihuela-Espina, F

    2011-01-01

    Purpose To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. Results The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, providing certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. Conclusion The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow for absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool. PMID:21904394

  2. Multispectral retinal image analysis: a novel non-invasive tool for retinal imaging.

    PubMed

    Calcagni, A; Gibson, J M; Styles, I B; Claridge, E; Orihuela-Espina, F

    2011-12-01

    To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, providing certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow for absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool.
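
    As a much-simplified illustration of per-pixel parameter recovery (the paper fits a full computer-generated reflectance model of the fundus, not a linear mixture), a least-squares unmixing of two chromophores with made-up spectra can be sketched as follows:

```python
import numpy as np

# The six acquisition wavelengths from the study (nm); the spectra matrix
# below holds invented absorption values purely for illustration
# (rows: wavelengths, columns: macular pigment, haemoglobin).
wavelengths = np.array([507, 525, 552, 585, 596, 611])
spectra = np.array([
    [1.0, 0.2],
    [0.8, 0.3],
    [0.6, 0.5],
    [0.4, 0.7],
    [0.3, 0.8],
    [0.2, 1.0],
])

def unmix(pixel_absorbance):
    """Least-squares estimate of chromophore concentrations at one pixel."""
    c, *_ = np.linalg.lstsq(spectra, pixel_absorbance, rcond=None)
    return c

# Simulate a pixel with known concentrations and recover them.
true_c = np.array([0.7, 0.4])
measured = spectra @ true_c
print(np.round(unmix(measured), 3))  # recovers [0.7, 0.4]
```

    Applying such a fit at every pixel yields the parametric concentration maps described in the abstract; the actual model-based inversion additionally accounts for light transport in the fundus layers.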

  3. Hyperspectral fluorescence imaging coupled with multivariate image analysis techniques for contaminant screening of leafy greens

    NASA Astrophysics Data System (ADS)

    Everard, Colm D.; Kim, Moon S.; Lee, Hoyoung

    2014-05-01

    The production of contaminant-free fresh fruit and vegetables is needed to reduce foodborne illnesses and related costs. Leafy greens grown in the field can be susceptible to fecal matter contamination from uncontrolled livestock and wild animals entering the field. Pathogenic bacteria can be transferred via fecal matter, and several outbreaks of E. coli O157:H7 have been associated with the consumption of leafy greens. This study examines the use of hyperspectral fluorescence imaging coupled with multivariate image analysis to detect fecal contamination on spinach leaves (Spinacia oleracea). Hyperspectral fluorescence images from 464 to 800 nm were captured; ultraviolet excitation was supplied by two LED-based line light sources at 370 nm. Key wavelengths and algorithms useful for a contaminant screening optical imaging device were identified and developed, respectively. A non-invasive screening device has the potential to reduce the harmful consequences of foodborne illnesses.

  4. [Retinal image analysis to detect lesions associated with diabetic retinopathy].

    PubMed

    Sánchez Gutiérrez, C I; López Gálvez, M I; Hornero Sánchez, R; Poza Crespo, J

    2004-12-01

    Diabetic retinopathy is a leading cause of vision loss in developed countries. Regular diabetic retinal eye screenings are needed to detect early signs of retinopathy, so that appropriate treatments can be rendered to prevent blindness. Digital imaging is becoming available as a means of screening for diabetic retinopathy. However, with the large number of patients undergoing screenings, medical professionals require a tremendous amount of time and effort to analyse and diagnose the fundus photographs. Our aim is to develop an automatic algorithm using digital image analysis for detecting these early lesions in retinal images. An automatic method to detect hard exudates, a lesion associated with diabetic retinopathy, is proposed. The algorithm localises them based on their colour, using a statistical classification, and on their sharp edges, applying an edge detector. A sensitivity of 79.62% with a mean of 3 false positives per image is obtained on a database of 20 retinal images of variable colour, brightness and quality. The number of false negatives increases when the hard exudates are very close to the vessel tree. The long-term goal of the project is to automate screening for diabetic retinopathy from retinal images. We have described the preliminary development of a tool providing automatic analysis of digital fundus photographs to localise hard exudates. Future work will address improving the obtained results and will also focus on detecting other lesions.
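    The colour-plus-edge idea above can be sketched in a few lines of array code. This is an illustrative toy, not the authors' algorithm: the thresholds, the (R+G)/2 − B colour score, and the function name are all assumptions made for this sketch.

```python
import numpy as np

def detect_exudate_candidates(rgb, color_thresh=0.6, edge_thresh=0.5):
    """Flag pixels that are both bright-yellowish and lie on sharp edges.

    rgb: float array (H, W, 3) with values in [0, 1].
    Returns a boolean candidate mask.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Hard exudates appear as bright yellow-white lesions: high red and
    # green intensity relative to blue.
    colour_mask = (r + g) / 2.0 - b > color_thresh

    # Sharp borders: Sobel gradient magnitude on the green channel.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(g, 1, mode="edge")
    h, w = g.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    edge_mask = np.hypot(gx, gy) > edge_thresh

    return colour_mask & edge_mask
```

    On a real fundus photograph this would mark only lesion borders; the published method additionally uses a trained statistical classifier on colour rather than a fixed threshold.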

  5. Texture Analysis of Medical Images Using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Fernández, Margarita; Mavilio, Adriana

    2002-08-01

    Texture analysis can contribute to a better interpretation of medical images, providing not only qualitative but also quantitative information about the degree of tissue damage. In this work an algorithm is developed which uses the wavelet transform to carry out supervised segmentation of echographic images of injured Achilles tendons of athletes. To construct the pattern, the image of the athlete's healthy tendon tissue is taken as a reference, exploiting the bilateral duplication of this structure. Texture features are calculated on the wavelet expansion coefficients of the images. The Mahalanobis distance between texture samples of the injured tissue and the pattern texture is computed and used as the discriminating function. It is concluded that this distance, after appropriate medical calibration, can offer quantitative information about the degree of injury at every point along the damaged tissue; further, its behavior along the segmented image can serve as a measure of the degree of change in tissue properties. A similarity-degree parameter is defined, obtained from the correlation between the distance histograms of healthy and damaged tissue. It is also shown that this parameter, when properly calibrated, can offer a quantitative global evaluation of the state of the injured tissue.
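    A minimal sketch of the pipeline above: one-level Haar detail energies as texture features, and the Mahalanobis distance to the healthy-tissue pattern as the discriminating function. The feature choice, sign convention, and all numbers are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def haar_detail_energies(img):
    """One-level Haar wavelet decomposition of a 2-D array; returns the
    mean absolute detail coefficients of the three detail subbands
    (one common sign convention) as a 3-feature texture vector."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    lh = (a - b + c - d) / 4.0   # detail across columns
    hl = (a + b - c - d) / 4.0   # detail across rows
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return np.array([np.abs(s).mean() for s in (lh, hl, hh)])

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of a feature vector x from a pattern
    distribution with the given mean and inverse covariance."""
    d = np.asarray(x, float) - mean
    return float(np.sqrt(d @ cov_inv @ d))
```

    Computing these features in windows along the tendon and taking the distance to the healthy pattern yields the per-point injury profile described in the abstract.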

  6. Mediman: Object oriented programming approach for medical image analysis

    SciTech Connect

    Coppens, A.; Sibomana, M.; Bol, A.; Michel, C. (Positron Tomography Lab.)

    1993-08-01

    Mediman is a new image analysis package developed to quantitatively analyze Positron Emission Tomography (PET) data. It is object-oriented, written in C++, and its user interface is based on InterViews, on top of which new classes have been added. Mediman accesses data using an external data representation or an import/export mechanism, which avoids data duplication. Multimodality studies are organized in a simple database which includes images, headers, color tables, lists, objects of interest (OOI's) and history files. Stored color table parameters allow the user to focus directly on the interesting portion of the dynamic range. Lists allow the study to be organized according to modality, acquisition protocol, and temporal and spatial properties. OOI's (points, lines and regions) are stored in absolute 3-D coordinates, allowing correlation with other co-registered imaging modalities such as MRI or SPECT. OOI's have visualization properties and are organized into groups. Quantitative ROI analysis of anatomic images consists of position, distance and volume calculations on selected OOI's. An image calculator is connected to Mediman. Quantitation of metabolic images is performed via profiles, sectorization, time-activity curves and kinetic modeling. Mediman is menu- and mouse-driven, and macro-commands can be recorded and replayed. Its interface is customizable through a configuration file. The benefits of the object-oriented approach are discussed from a development point of view.

  7. Cardiac imaging: working towards fully-automated machine analysis & interpretation.

    PubMed

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-03-01

    Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.

  8. Automated image-based phenotypic analysis in zebrafish embryos

    PubMed Central

    Vogt, Andreas; Cholewinski, Andrzej; Shen, Xiaoqiang; Nelson, Scott; Lazo, John S.; Tsang, Michael; Hukriede, Neil A.

    2009-01-01

    Presently, the zebrafish is the only vertebrate model compatible with contemporary paradigms of drug discovery. Zebrafish embryos are amenable to automation necessary for high-throughput chemical screens, and optical transparency makes them potentially suited for image-based screening. However, the lack of tools for automated analysis of complex images presents an obstacle to utilizing the zebrafish as a high-throughput screening model. We have developed an automated system for imaging and analyzing zebrafish embryos in multi-well plates regardless of embryo orientation and without user intervention. Images of fluorescent embryos were acquired on a high-content reader and analyzed using an artificial intelligence-based image analysis method termed Cognition Network Technology (CNT). CNT reliably detected transgenic fluorescent embryos (Tg(fli1:EGFP)y1) arrayed in 96-well plates and quantified intersegmental blood vessel development in embryos treated with small molecule inhibitors of angiogenesis. The results demonstrate it is feasible to adapt image-based high-content screening methodology to measure complex whole organism phenotypes. PMID:19235725

  9. Cardiac imaging: working towards fully-automated machine analysis & interpretation

    PubMed Central

    Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido

    2017-01-01

    Introduction: Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation. PMID:28277804

  10. Image classification based on scheme of principal node analysis

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Ma, Zheng; Xie, Mei

    2016-11-01

    This paper presents a scheme of principal node analysis (PNA) with the aim of improving the representativeness of the learned codebook so as to enhance the classification rate for scene images. Original images are normalized to grayscale and scale-invariant feature transform (SIFT) descriptors are extracted from each image in the preprocessing stage. Then, the PNA-based scheme is applied to the SIFT descriptors with iteration and selection algorithms. The principal nodes of each image are selected through spatial analysis of the SIFT descriptors with the Manhattan distance (L1 norm) and the Euclidean distance (L2 norm) in order to increase the representativeness of the codebook. To evaluate the performance of our scheme, the feature vector of each image is calculated by two baseline methods after the codebook is constructed. The L1-PNA- and L2-PNA-based baseline methods are tested and compared with different codebook sizes over three public scene image databases. The experimental results show the effectiveness of the proposed PNA scheme, which achieves a higher categorization rate.
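    As a concrete illustration of the baseline bag-of-features step the abstract refers to (not the PNA algorithm itself, which the paper defines), here is how descriptors are assigned to their nearest codewords under L1 or L2 distance; the function name is invented for this sketch.

```python
import numpy as np

def bof_histogram(descriptors, codebook, norm="l2"):
    """Bag-of-features vector: normalized histogram of nearest-codeword
    assignments under the chosen distance (L1 or L2)."""
    diff = descriptors[:, None, :] - codebook[None, :, :]
    dist = np.abs(diff).sum(-1) if norm == "l1" else (diff ** 2).sum(-1)
    nearest = dist.argmin(axis=1)
    return np.bincount(nearest, minlength=len(codebook)) / len(descriptors)
```

    The resulting histogram is the per-image feature vector that a classifier would consume; the paper's contribution lies in how the codebook itself is built.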

  11. Cascaded image analysis for dynamic crack detection in material testing

    NASA Astrophysics Data System (ADS)

    Hampel, U.; Maas, H.-G.

    Concrete specimens in civil engineering material testing often show fissures or hairline cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image from a camera viewing the whole specimen. Conventional image analysis techniques will detect fissures only if their width is in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, which are generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined at a precision of 1/50 pixel.

  12. Development of an image-analysis light-scattering technique

    NASA Astrophysics Data System (ADS)

    Algarni, Saad; Kashuri, Hektor; Iannacchione, Germano

    2013-03-01

    We describe progress in developing a versatile image-analysis approach for light-scattering experiments. Recent advances in image analysis algorithms, computational power, and CCD image capture have allowed for the complete digital recording of the scattering of coherent laser light by a wide variety of samples. This digital record can then yield both static and dynamic information about the scattering events. Our approach is described using a very simple and inexpensive experimental arrangement for liquid samples. Calibration experiments were performed on aqueous suspensions of latex spheres of 0.5 and 1.0 micrometer diameter at three concentrations of 2 x 10^-6, 1 x 10^-6, and 5 x 10^-7 % w/w at room temperature. The resulting data span a wave-vector range of q = 10^2 to 10^5 cm^-1 and time averages over 0.05 to 1200 sec. The static analysis yields particle sizes in good agreement with expectations, and a simple dynamic analysis yields an estimate of the characteristic time scale of the particle dynamics. Further developments in image corrections (laser stability, vibration, curvature, etc.) as well as time auto-correlation analysis will also be discussed.
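    The time auto-correlation analysis mentioned at the end is conventionally the normalized intensity correlation g2(tau). A minimal sketch for a single pixel's intensity trace follows; the function name and estimator details are assumptions.

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation
    g2(tau) = <I(t) I(t+tau)> / <I>^2, estimated from one time series.
    Decay of g2 toward 1 sets the characteristic dynamic time scale."""
    i = np.asarray(intensity, float)
    mean_sq = i.mean() ** 2
    return np.array([(i[:i.size - t] * i[t:]).mean() / mean_sq
                     for t in range(1, max_lag + 1)])
```

    In a full analysis this would be computed per wave-vector bin of the recorded speckle images rather than per pixel.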

  13. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1983-01-01

    Analysis during the quarter was carried out on geometric, radiometric, and information content aspects of both MSS and thematic mapper (TM) data. Test sites in Webster County, Iowa, in Chicago, IL, and near Joliet, IL were studied. Band-to-band registration was evaluated: TM Bands 5 and 7 were found to be approximately 0.5 pixel out of registration with Bands 1, 2, 3, and 4, and the thermal band was found to be misregistered by four 30 m pixels to the east and one pixel to the south. Certain MSS bands indicated nominally 0.25 pixel misregistration. Radiometrically, some striping was observed in TM bands, and significant oscillatory noise patterns exist in the MSS data, possibly due to jitter. Information content was compared before and after cubic convolution resampling, and no differences were observed in the statistics or separability of basic scene classes.

  14. LANDSAT 4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1983-01-01

    A comparative analysis of TM and MSS data was completed, and the results indicate that there are half as many separable spectral classes in the MSS data as in the TM data. In addition, the minimum separability between classes was also much less in MSS data. Radiometric data quality was also investigated for the TM by computing power spectrum estimates for dark-level data from Lake Michigan. Two significant coherent noise frequencies were observed, one with a wavelength of 3.12 pixels and the other with a 17 pixel wavelength. The amplitude was small (nominally 0.6 digital counts standard deviation) and the noise appears primarily in Bands 3 and 4. No significant levels were observed in other bands. Scan-angle-dependent brightness effects were also evaluated.

  15. Large-scale Biomedical Image Analysis in Grid Environments

    PubMed Central

    Kumar, Vijay S.; Rutt, Benjamin; Kurc, Tahsin; Catalyurek, Umit; Pan, Tony; Saltz, Joel; Chow, Sunny; Lamont, Stephan; Martone, Maryann

    2012-01-01

    Digital microscopy scanners are capable of capturing multi-Gigapixel images from single slides, thus producing images of sizes up to several tens of Gigabytes each, and a research study may have hundreds of slides from a specimen. The sheer size of the images and the complexity of image processing operations create roadblocks to effective integration of large-scale imaging data in research. This paper presents the application of a component-based Grid middleware system for processing extremely large images obtained from digital microscopy devices. We have developed parallel, out-of-core techniques for different classes of data processing operations commonly employed on images from confocal microscopy scanners. These techniques are combined into data pre-processing and analysis pipelines using the component-based middleware system. The experimental results show that 1) our implementation achieves good performance and can handle very large (terabyte-scale) datasets on high-performance Grid nodes, consisting of computation and/or storage clusters, and 2) it can take advantage of multiple Grid nodes connected over high-bandwidth wide-area networks by combining task- and data-parallelism. PMID:18348945

  16. Kinematic analysis of human walking gait using digital image processing.

    PubMed

    O'Malley, M; de Paor, D L

    1993-07-01

    A system using digital image processing techniques for kinematic analysis of human gait has been developed. The system is cheap, easy to use, automated and provides useful detailed quantitative information to the medical profession. Passive markers comprising black annuli on white card are placed on the anatomical landmarks of the subject. Digital images of the subject walking past a white background are acquired at the standard television rate of 25 frames per second. The images are obtained, stored and processed using standard commercially available hardware, i.e. video camera, video recorder, digital framestore and an IBM PC. Using a single grey-level threshold, all the images are converted to binary images. An automatic routine then uses a set of pattern recognition algorithms to locate the markers in each image accurately and consistently. The positions of the markers are analysed to determine to which anatomical landmark they correspond, and thus a stick diagram for each image is obtained. There is also a facility where the positions of the markers may be entered manually and errors corrected. The results may be presented in a variety of ways: stick diagram animation, sagittal displacement graphs, flexion diagrams and gait parameters.

  17. Hyperspectral imaging for non-contact analysis of forensic traces.

    PubMed

    Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G

    2012-11-30

    Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy, to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges, such as the analysis of traces on backgrounds encountered in casework, are addressed, and we conclude with a summary of possible future applications.

  18. Lung nodules detection in chest radiography: image components analysis

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Mou, Xuanqin; Yang, Ying; Yan, Hao

    2009-02-01

    We aimed to evaluate the effect of different components of the chest image on the performance of both a human observer and a channelized Fisher-Hotelling (CFH) model in a nodule detection task. Irrelevant and relevant components were separated from clinical chest radiographs by employing Principal Component Analysis (PCA) methods. Human observer performance was evaluated in a two-alternative forced-choice (2AFC) task on original clinical images and on images containing only anatomical structure, obtained by the PCA methods. A channelized Fisher-Hotelling model with Laguerre-Gauss basis functions was evaluated to predict human performance. We show that the relevant component is the primary factor influencing nodule detection in chest radiography. There is a clear difference in detectability between the human observer and the CFH model for nodule detection in images containing only anatomical structure. The CFH model should therefore be used with care.

  19. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area-array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  20. Classification of pollen species using autofluorescence image analysis.

    PubMed

    Mitsumoto, Kotaro; Yabusaki, Katsumi; Aoyagi, Hideki

    2009-01-01

    A new method to classify pollen species was developed by monitoring autofluorescence images of pollen grains. The pollens of nine species were selected, and their autofluorescence images were captured by a microscope equipped with a digital camera. The pollen size and the ratio of the blue to red pollen autofluorescence spectra (the B/R ratio) were calculated by image processing. The B/R ratios and pollen size varied among the species. Furthermore, the scatter-plot of pollen size versus the B/R ratio showed that pollen could be classified to the species level using both parameters. The pollen size and B/R ratio were confirmed by means of particle flow image analysis and the fluorescence spectra, respectively. These results suggest that a flow system capable of measuring both scattered light and the autofluorescence of particles could classify and count pollen grains in real time.
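    The two measurements and the scatter-plot classification described above can be sketched as a nearest-reference lookup in the (size, B/R) plane. The species values, normalization, and function names are made up for illustration.

```python
import numpy as np

def grain_features(rgb, mask):
    """Pixel area and blue-to-red mean-intensity ratio of one grain,
    given its image and a boolean mask of its pixels."""
    area = int(mask.sum())
    blue = rgb[..., 2][mask].mean()
    red = rgb[..., 0][mask].mean()
    return area, blue / red

def classify(features, reference):
    """Nearest species in normalized (area, B/R) space. `reference`
    maps species name -> (area, B/R); both axes are scaled by the
    reference spread so neither dominates the distance."""
    names = list(reference)
    ref = np.array([reference[n] for n in names], float)
    scale = ref.max(0) - ref.min(0)
    scale[scale == 0] = 1.0
    x = (np.array(features) - ref.min(0)) / scale
    r = (ref - ref.min(0)) / scale
    return names[int(np.argmin(((r - x) ** 2).sum(1)))]
```

    A flow-based counter would apply the same two features to each particle passing the detector.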

  1. Infrared medical image visualization and anomalies analysis method

    NASA Astrophysics Data System (ADS)

    Gong, Jing; Chen, Zhong; Fan, Jing; Yan, Liang

    2015-12-01

    Infrared medical examination detects disease by scanning overall human body temperature with infrared thermal equipment and obtaining the temperature anomalies of the corresponding body parts. To obtain the temperature anomalies and diseased parts, an infrared medical image visualization and anomaly analysis method is proposed in this paper. First, the original data are visualized as a single-channel gray image; second, the normalized gray image is turned into a pseudo-color image; third, background segmentation is applied to filter out background noise; fourth, the anomalous pixels are clustered with a breadth-first search algorithm; finally, the regions of temperature anomalies or diseased parts are marked. Tests show that this is an efficient and accurate way to analyze and diagnose diseased body parts through temperature anomalies.
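    The fourth step, clustering the anomalous pixels with breadth-first search, follows the standard flood-fill pattern; 4-connectivity is an assumption of this sketch.

```python
from collections import deque

def bfs_clusters(mask):
    """Group 4-connected True pixels of a 2-D boolean grid into
    clusters (the temperature-anomaly regions)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    clusters = []
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            # Flood out from this unvisited anomalous pixel.
            queue, comp = deque([(sy, sx)]), []
            seen[sy][sx] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            clusters.append(comp)
    return clusters
```

    Each returned cluster is one candidate anomaly region to be marked on the pseudo-color image.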

  2. Automated analysis of image mammogram for breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases inside the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be made in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to distinguish the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented regions. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that the segmentation technique with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
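    The "2D Fourier" fractal step is commonly implemented as the log-log slope of the radially averaged power spectrum, from which a fractal dimension can be derived; a steeper slope corresponds to a smoother texture. A sketch under that assumption (not necessarily this paper's exact estimator):

```python
import numpy as np

def spectral_slope(img):
    """Log-log slope of the radially averaged 2-D Fourier power
    spectrum of a square image; rougher texture gives a shallower
    (less negative) slope."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    p = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=p.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs] + 1e-12), 1)
    return slope
```

    Comparing this slope between tissue regions is one way to separate the "normal" and "abnormal" density classes the abstract refers to.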

  3. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  4. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  5. Quantitative Medical Image Analysis for Clinical Development of Therapeutics

    NASA Astrophysics Data System (ADS)

    Analoui, Mostafa

    There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing class of biomarkers being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewer bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.

  6. Computer-aided photometric analysis of dynamic digital bioluminescent images

    NASA Astrophysics Data System (ADS)

    Gorski, Zbigniew; Bembnista, T.; Floryszak-Wieczorek, J.; Domanski, Marek; Slawinski, Janusz

    2003-04-01

    The paper deals with photometric and morphologic analysis of bioluminescent images obtained by registering light radiated directly from plant objects. The registration of images from ultra-weak light sources by the single photon counting (SPC) technique is the subject of this work. The radiation is registered with a 16-bit charge-coupled device (CCD) camera "Night Owl" together with WinLight EG&G Berthold software. Additional application-specific software has been developed to deal with objects that change during the exposure time. The advantages of the elaborated set of easily configurable tools, named FCT, for computer-aided photometric and morphologic analysis of numerous series of quantitatively imperfect chemiluminescent images are described. Instructions on how to use these tools are given and exemplified with several algorithms for transforming image series. Using the proposed FCT set, automatic photometric and morphologic analysis reveals the information hidden within series of chemiluminescent images reflecting defensive processes in poinsettia (Euphorbia pulcherrima Willd) leaves affected by the pathogenic fungus Botrytis cinerea.

  7. [Two dimensional analysis of neural activities by calcium imaging].

    PubMed

    Kudo, Y

    1995-03-01

    The development of fluorescent Ca2+ indicators made it possible to measure the intracellular Ca2+ concentration easily. Furthermore, the analysis of fluorescent images obtained with a fluorescence microscope has become easy and popular because of the tremendous development of computers and related hardware. Time courses and topographical differences in Ca2+ concentration can be demonstrated as two-dimensional color-coded images. Such images are sometimes quite persuasive: "seeing is believing". We can detect completely new sites of biological phenomena through this method. This article describes many different types of Ca2+ indicators and their applications to biological image analysis. In particular, the method called "macro" image analysis can demonstrate regional differences in changes of the Ca2+ concentration of a slice preparation during medical treatment of cerebral ischemia. More than ten years have passed since the first demonstration of a fluorescent Ca2+ indicator, quin 2, and methods using such Ca2+ indicators have become important in the field of experimental biology.

  8. Multi-parametric imaging of cell heterogeneity in apoptosis analysis.

    PubMed

    Vorobjev, Ivan A; Barteneva, Natasha S

    2017-01-01

    Apoptosis is a multistep process of programmed cell death in which different morphological and molecular events occur simultaneously and/or sequentially. Recent progress in programmed cell death analysis has uncovered large heterogeneity in the response of individual cells to apoptotic stimuli. Analysis of the complex and dynamic process of apoptosis requires the capacity to quantitate multiparametric data obtained from multicolor labeling and/or fluorescent reporters of live cells in conjunction with morphological analysis. Modern methods of multiparametric apoptosis study include, but are not limited to, fluorescence microscopy, flow cytometry and imaging flow cytometry. In the current review we discuss the image-based evaluation of apoptosis on the single-cell and population level by imaging flow cytometry in parallel with other techniques. The advantage of imaging flow cytometry is its ability to interrogate multiparametric morphometric and fluorescence quantitative data in a statistically robust manner. Here we describe the current status and future perspectives of this emerging field, as well as some challenges and limitations. We also highlight a number of assays and multicolor labeling probes, utilizing both microscopy and different variants of imaging cytometry, including commonly used assays and novel developments in the field.

  9. Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging

    NASA Technical Reports Server (NTRS)

    Felt, Frederick S.

    2005-01-01

    During electrical testing of a full-field CCD image sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of these failures on the devices. After removing the fiber optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array. This structure complicated the failure analysis. Alternate methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A, SQUID, and MP. Of these, the SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.

  10. Aural analysis of image texture via cepstral filtering and sonification

    NASA Astrophysics Data System (ADS)

    Rangayyan, Rangaraj M.; Martins, Antonio C. G.; Ruschioni, Ruggero A.

    1996-03-01

    Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques oriented towards statistical analysis that may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-)periodic texture (where a basic texture element or `texton' is repeated over the image field) and random texture (which can be modeled as filtered or `spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter spot characteristics and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and the horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection. We believe that our approaches provide the much-desired `natural' connection between the image
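    The projection step at the heart of this approach can be sketched with plain numpy (an illustrative reimplementation under our own naming, not the authors' code); each projection can then be normalized into an audio buffer for playback.

```python
import numpy as np

def projection(img, theta_deg):
    """Radon-style projection: sum image intensities along lines
    perpendicular to the direction theta, using nearest-neighbour
    binning of each pixel's signed distance from the central ray."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx - (w - 1) / 2.0          # coordinates relative to the centre
    y = yy - (h - 1) / 2.0
    t = np.deg2rad(theta_deg)
    s = x * np.cos(t) + y * np.sin(t)
    bins = np.round(s).astype(int)
    bins -= bins.min()
    return np.bincount(bins.ravel(), weights=img.ravel())

def to_waveform(proj):
    """Map a projection to a zero-mean audio buffer in [-1, 1]."""
    p = proj - proj.mean()
    m = np.abs(p).max()
    return p / m if m > 0 else p
```

    Playing `to_waveform(projection(img, theta))` for a sweep of angles gives the sequential auditory display the paper describes.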

  11. Multimodal facial color imaging modality for objective analysis of skin lesions

    PubMed Central

    Bae, Youngwoo; Nelson, J. Stuart; Jung, Byungjo

    2009-01-01

    We introduce a multimodal facial color imaging modality that provides a conventional color image, parallel and cross-polarization color images, and a fluorescent color image. We characterize the imaging modality and describe the image analysis methods for objective evaluation of skin lesions. The parallel and cross-polarization color images are useful for the analysis of skin texture, pigmentation, and vascularity. The polarization image, which is derived from parallel and cross-polarization color images, provides morphological information of superficial skin lesions. The fluorescent color image is useful for the evaluation of skin chromophores excited by UV-A radiation. In order to demonstrate the validity of the new imaging modality in dermatology, sample images were obtained from subjects with various skin disorders and image analysis methods were applied for objective evaluation of those lesions. In conclusion, we are confident that the imaging modality and analysis methods should be useful tools to simultaneously evaluate various skin lesions in dermatology. PMID:19123654
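    The polarization image derived from the parallel and cross-polarized channels is commonly computed as a normalized difference; a minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def polarization_image(par, cross, eps=1e-9):
    """Normalized difference of parallel and cross-polarized images:
    values near 1 indicate polarization-preserving (superficial)
    reflection; values near 0 indicate depolarized (deeper) backscatter."""
    par = np.asarray(par, dtype=float)
    cross = np.asarray(cross, dtype=float)
    return (par - cross) / (par + cross + eps)
```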

  12. Robust approach to ocular fundus image analysis

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-07-01

    The analysis of morphological and structural modifications of retinal blood vessels plays an important role both in establishing the presence of systemic diseases such as hypertension and diabetes and in studying their course. The paper describes a robust set of techniques developed to quantitatively evaluate morphometric aspects of the ocular fundus vascular and microvascular network. The following are defined: (1) the concept of 'Local Direction of a vessel' (LD); (2) a special form of edge detection, named Signed Edge Detection (SED), which uses LD to choose the convolution kernel in the edge detection process and is able to distinguish between the left and the right vessel edge; (3) an iterative tracking (IT) method. The developed techniques use both LD and SED intensively in: (a) the automatic detection of the number, position and size of blood vessels departing from the optic papilla; (b) the tracking of the body and edges of the vessels; (c) the recognition of vessel branches and crossings; (d) the extraction of a set of features such as blood vessel length and average diameter, artery and arteriole tortuosity, crossing position and the angle between two vessels. The algorithms, implemented in the C language, have an execution time that depends on the complexity of the currently processed vascular network.
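    The core idea behind Signed Edge Detection, that the sign of a direction-aware gradient separates the two vessel edges, can be illustrated on a 1-D cross-sectional profile (a simplified sketch assuming a dark vessel on a bright background, not the paper's 2-D kernel machinery):

```python
import numpy as np

def vessel_edges(profile):
    """Locate the left and right edges of a dark vessel on a bright
    background from a cross-sectional intensity profile: the left
    (bright-to-dark) edge gives the most negative gradient, the right
    (dark-to-bright) edge the most positive one.  In 2-D the same
    signed response would be computed with a kernel chosen from the
    local vessel direction (LD)."""
    g = np.gradient(np.asarray(profile, dtype=float))
    return int(np.argmin(g)), int(np.argmax(g))
```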

  13. Secure thin client architecture for DICOM image analysis

    NASA Astrophysics Data System (ADS)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examination, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart-algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality can be tracked with this software.

  14. Objective quantification of plaque using digital image analysis.

    PubMed

    Sagel, P A; Lapujade, P G; Miller, J M; Sunberg, R J

    2000-01-01

    Dental plaque is the precursor to many oral diseases (e.g. gingivitis, periodontitis, caries) and thus its removal and control are an important aspect of oral hygiene. Many of the oral care products available today remove or inhibit the growth of dental plaque. Historically, the antiplaque efficacy of products was measured in blinded clinical trials where the amount of plaque on teeth was assessed via subjective visual grading with predefined scales such as the Turesky index. The limited ability of the examiner to consistently apply the index over time and the sensitivity of the scales often led to large, expensive clinical trials. The present method is an automatic measurement of plaque coverage on the facial surfaces of teeth using a digital image analysis technique. Dental plaque disclosed with fluorescein is digitally imaged under long-wave ultraviolet light. Ultraviolet illumination of fluorescein-disclosed plaque produces an image whose pixels can be classified based on color into one of five classes: teeth, plaque, gingiva, plaque on gingiva, or lip retractors. The amount of plaque on teeth can be determined by summation of the number of plaque pixels. The percent coverage is calculated from the number of plaque pixels and teeth pixels in the image. The digital image analysis of plaque allows facial plaque levels to be precisely measured (RSD = 3.77%). In application, the digital image analysis of plaque is capable of measuring highly significant plaque growth inhibition of a stannous fluoride dentifrice with as few as 10 subjects in a cross-over design.
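    The pixel classification and percent-coverage computation can be sketched as a nearest-reference-colour rule (the reference colours below are invented placeholders; a validated system would calibrate them and include all five classes):

```python
import numpy as np

# Hypothetical RGB reference colours for three of the five classes.
CLASSES = {
    'teeth':   np.array([200, 200, 180]),
    'plaque':  np.array([80, 220, 80]),   # fluorescein glows green under UV-A
    'gingiva': np.array([150, 60, 60]),
}

def percent_plaque(rgb):
    """Label each pixel with its nearest reference colour, then report
    plaque pixels as a percentage of the tooth area (teeth + plaque)."""
    names = list(CLASSES)
    refs = np.stack([CLASSES[n] for n in names]).astype(float)
    flat = rgb.reshape(-1, 3).astype(float)
    d = np.linalg.norm(flat[:, None, :] - refs[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    plaque = np.sum(labels == names.index('plaque'))
    teeth = np.sum(labels == names.index('teeth'))
    return 100.0 * plaque / (plaque + teeth)
```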

  15. Tracking of Laboratory Debris Flow Fronts with Image Analysis

    NASA Astrophysics Data System (ADS)

    Queiroz de Oliveira, Gustavo; Kulisch, Helmut; Fischer, Jan-Thomas; Scheidl, Christian; Pudasaini, Shiva P.

    2015-04-01

    An image analysis technique is applied to track the time evolution of rapid debris flow fronts and their velocities in laboratory experiments. These experiments are part of the project avaflow.org, which intends to develop a GIS-based open source computational tool to describe a wide spectrum of rapid geophysical mass flows, including avalanches and real two-phase debris flows down complex natural slopes. The laboratory model consists of a large rectangular channel, 1.4 m wide and 10 m long, with adjustable inclination and other flow configurations. The setup allows the investigation of different two-phase material compositions, including large fluid fractions. The large size makes it possible to transfer the results to large-scale natural events while providing increased measurement accuracy. The images are captured by a high-speed camera, a standard digital camera. The fronts are tracked by the camera to obtain data in debris flow experiments. The reflectance analysis detects the debris front in every image frame; its presence changes the reflectance at a certain pixel location during the flow. The accuracy of the measurements was improved with a camera calibration procedure. As one of the great problems in imaging and analysis, the systematic distortions of the camera lens are described in terms of radial and tangential parameters. The calibration procedure estimates the optimal values for these parameters. This allows us to obtain physically correct, undistorted image pixels. Then, we map the images onto the physical model geometry using projective photogrammetry, in which the image coordinates are connected with the object space coordinates of the flow. Finally, the physical model geometry is rewritten in the direct linear transformation form, which allows for conversion from one coordinate system to another. With our approach, the debris front position can then be estimated by combining the reflectance, calibration and the linear transformation. The consecutive debris front
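    For a planar flume surface, the projective mapping and its direct linear transformation (DLT) form reduce to a 3x3 homography estimated from point correspondences; a compact SVD-based sketch (our illustration, with made-up calibration points):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transformation: estimate the 3x3 homography H
    with H @ [x, y, 1] ~ [u, v, 1] from >= 4 correspondences, as the
    null space of the stacked linear constraints."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map an image point to flume-plane coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

    Radial and tangential lens distortion would be removed first, with the calibrated camera model, so that the homography sees undistorted pixels.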

  16. Non-contacting Hand Image Certification System Using Morphological Analysis

    NASA Astrophysics Data System (ADS)

    Moritani, Motoki; Saitoh, Fumihiko

    This paper proposes a non-contacting certification system for access security control that uses morphological analysis of hand images. A non-contacting hand image certification system is more effective than a contacting system where psychological resistance and conformability are concerns. Morphology is applied to obtain useful individual characteristics even if the pose of the hand changes. First, a hand image is captured using transmitted lighting. Next, the wrist area is removed from the hand area. The pattern spectrum that represents the form of the hand area is measured by morphological analysis, and the spectrum is normalized to a pattern invariant to scale change. Finally, the certification of an individual is performed by a neural network. The experimental results show that sufficient accuracy to certify individuals was obtained by the proposed system.
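    The pattern spectrum is the area removed by morphological openings of increasing size; a self-contained binary sketch with a 3x3 structuring element (our reimplementation of the textbook definition, not the authors' code):

```python
import numpy as np

def erode(b):
    """Binary erosion with a 3x3 square structuring element."""
    p = np.pad(b, 1, constant_values=False)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def dilate(b):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(b, 1, constant_values=False)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + b.shape[0], 1 + dx:1 + dx + b.shape[1]]
    return out

def opening(b, r):
    """Opening by a (2r+1)x(2r+1) square: r erosions then r dilations."""
    for _ in range(r):
        b = erode(b)
    for _ in range(r):
        b = dilate(b)
    return b

def pattern_spectrum(b, rmax):
    """Area removed at each opening scale; dividing by b.sum() would
    give the scale-normalized form used for recognition."""
    areas = [int(opening(b, r).sum()) for r in range(rmax + 1)]
    return [areas[r] - areas[r + 1] for r in range(rmax)]
```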

  17. Trabecular architecture analysis in femur radiographic images using fractals.

    PubMed

    Udhayakumar, G; Sujatha, C M; Ramakrishnan, S

    2013-04-01

    Trabecular bone is a highly complex anisotropic material that exhibits varying magnitudes of strength in compression and tension. Analysis of trabecular architectural alterations, which manifest as loss of trabecular plates and connections, has been shown to yield better estimation of bone strength. In this work, an attempt has been made toward the development of an automated system for investigation of trabecular femur bone architecture using fractal analysis. Conventional radiographic femur bone images recorded using standard protocols are used in this study. The compressive and tensile regions in the images are delineated using preprocessing procedures. The delineated images are analyzed using Higuchi's fractal method to quantify the pattern heterogeneity and anisotropy of the trabecular bone structure. The results show that the extracted fractal features are distinct for compressive and tensile regions of normal and abnormal human femur bone. As the strength of bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
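    Higuchi's method estimates a fractal dimension from the log-log slope of curve length versus scale; a 1-D sketch that could be applied to intensity profiles extracted from the radiographs (our reimplementation of the standard algorithm, not the authors' pipeline):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal: average curve length
    L(k) over subsampled series at delays k = 1..kmax, then the slope
    of log L(k) versus log(1/k)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # boundary correction
            lengths.append(dist * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)
    return float(slope)
```

    A straight line scores dimension 1 and white noise approaches 2; rougher trabecular profiles score higher.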

  18. 3D quantitative analysis of brain SPECT images

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Ceskovic, Ivan; Petrovic, Ratimir; Loncaric, Srecko

    2001-07-01

    The main purpose of this work is to develop a computer-based technique for quantitative analysis of 3-D brain images obtained by single photon emission computed tomography (SPECT). In particular, the volume and location of the ischemic lesion and penumbra are important for early diagnosis and treatment of infarcted regions of the brain. SPECT imaging is typically used as a diagnostic tool to assess the size and location of the ischemic lesion. The segmentation method presented in this paper utilizes a 3-D deformable model to determine the size and location of the regions of interest. The evolution of the model is computed using a level-set implementation of the algorithm. In addition to the 3-D deformable model, the method utilizes edge detection and region growing for pre-processing. Initial experimental results have shown that the method is useful for SPECT image analysis.
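    The region-growing step used for pre-processing can be sketched as a breadth-first flood from a seed voxel; a 2-D intensity-tolerance version (a simplification of the paper's pipeline, names ours):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected neighbours
    whose intensity lies within `tol` of the seed intensity."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    ref = float(img[seed])
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - ref) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask
```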

  19. Practical issues of hyperspectral imaging analysis of solid dosage forms.

    PubMed

    Amigo, José Manuel

    2010-09-01

    Hyperspectral imaging techniques have widely demonstrated their usefulness in different areas of interest in pharmaceutical research during the last decade. In particular, middle infrared, near infrared, and Raman methods have gained special relevance. This rapid increase has been promoted by the capability of hyperspectral techniques to provide robust and reliable chemical and spatial information on the distribution of components in pharmaceutical solid dosage forms. Furthermore, the valuable combination of hyperspectral imaging devices with adequate data processing techniques offers the perfect landscape for developing new methods for scanning and analyzing surfaces. Nevertheless, the instrumentation and subsequent data analysis are not exempt from issues that must be thoughtfully considered. This paper describes and discusses the main advantages and drawbacks of the measurements and data analysis of hyperspectral imaging techniques in the development of solid dosage forms.
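    A recurring data-processing step for such hyperspectral measurements is to unfold the cube into a pixels-by-wavelengths matrix before applying a factor model (PCA, MCR, etc.) and refold the scores into distribution maps; a minimal PCA-scores sketch (our illustration, not tied to any specific instrument):

```python
import numpy as np

def score_images(cube, n_components=1):
    """Unfold an (x, y, wavelength) cube, mean-centre the spectra,
    compute PCA scores via SVD, and refold them into score maps."""
    x, y, w = cube.shape
    m = cube.reshape(x * y, w).astype(float)
    m -= m.mean(axis=0)
    u, s, _ = np.linalg.svd(m, full_matrices=False)
    scores = u[:, :n_components] * s[:n_components]
    return scores.reshape(x, y, n_components)
```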

  20. Statistical Analysis of speckle noise reduction techniques for echocardiographic Images

    NASA Astrophysics Data System (ADS)

    Saini, Kalpana; Dewal, M. L.; Rohit, Manojkumar

    2011-12-01

    Echocardiography is a safe, easy and fast technology for diagnosing cardiac diseases. As in other ultrasound images, these images also contain speckle noise. In some cases this speckle noise is useful, such as in motion detection, but in general noise removal is required for better analysis of the image and proper diagnosis. Various adaptive and anisotropic filters are included in the statistical analysis. Statistical parameters such as the Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE) are calculated for performance measurement. Another important aspect is that blurring may occur during speckle noise removal, so it is preferred that the filter be able to enhance edges while removing noise.
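    The quality metrics named above are straightforward to compute; a small sketch for 8-bit images (our helper functions):

```python
import numpy as np

def rmse(ref, deg):
    """Root mean square error between reference and degraded images."""
    ref = np.asarray(ref, dtype=float)
    deg = np.asarray(deg, dtype=float)
    return float(np.sqrt(np.mean((ref - deg) ** 2)))

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, deg)
    return float('inf') if e == 0 else float(20.0 * np.log10(peak / e))
```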

  1. Texture analysis and classification of ultrasound liver images.

    PubMed

    Gao, Shuang; Peng, Yuhua; Guo, Huizhi; Liu, Weifeng; Gao, Tianxin; Xu, Yuanqing; Tang, Xiaoying

    2014-01-01

    Ultrasound, as a noninvasive imaging technique, is widely used to diagnose liver diseases. Texture analysis and classification of ultrasound liver images have become an important research topic across the world. In this study, GLGCM (Gray Level Gradient Co-occurrence Matrix) was first implemented for texture analysis of ultrasound liver images, followed by the use of GLCM (Gray Level Co-occurrence Matrix) at the second stage. Twenty-two features were obtained using the two methods, and the seven most powerful features were selected for classification using a BP (Back Propagation) neural network. Fibrosis was divided into five stages (S0-S4) in this study. The classification accuracies for S0-S4 were 100%, 90%, 70%, 90% and 100%, respectively.
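    The second-stage GLCM features can be sketched in a few lines of numpy for a single horizontal displacement (an illustrative reimplementation of the textbook definitions, not the authors' 22-feature pipeline):

```python
import numpy as np

def glcm(img, levels):
    """Grey-level co-occurrence matrix for displacement (0, 1),
    normalised to a joint probability table."""
    i = img[:, :-1].ravel()
    j = img[:, 1:].ravel()
    m = np.zeros((levels, levels))
    np.add.at(m, (i, j), 1)
    return m / m.sum()

def contrast(p):
    """Haralick contrast: expected squared grey-level difference."""
    idx = np.arange(p.shape[0])
    return float((p * (idx[:, None] - idx[None, :]) ** 2).sum())

def energy(p):
    """Haralick energy (angular second moment)."""
    return float((p ** 2).sum())
```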

  2. Analysis of imaging for laser triangulation sensors under Scheimpflug rule.

    PubMed

    Miks, Antonin; Novak, Jiri; Novak, Pavel

    2013-07-29

    In this work, a detailed analysis of the problem of imaging objects lying in a plane tilted with respect to the optical axis of a rotationally symmetrical optical system is performed by means of geometrical optics theory. It is shown that fulfillment of the so-called Scheimpflug condition (Scheimpflug rule) does not guarantee a sharp image of the object, as is usually claimed, because the dependence of the aberrations of real optical systems on the object distance causes the image to become blurred. The f-number of a given optical system also varies with the object distance. The influence of these effects on the accuracy of laser triangulation sensor measurements is shown. A detailed analysis of laser triangulation sensors, based on geometrical optics theory, is performed, and relations for the calculation of measurement errors and construction parameters of laser triangulation sensors are derived.
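    The geometric content of the Scheimpflug rule is easy to check numerically in the paraxial thin-lens model: if the tilted object plane meets the lens plane at height c, the sharp image points fall on a plane through the same height (a first-order sketch only; it deliberately ignores the aberration and f-number effects that are the paper's point):

```python
def thin_lens_image(u, h, f):
    """Paraxial image of an object point at axial distance u and
    height h through a thin lens of focal length f."""
    v = 1.0 / (1.0 / f - 1.0 / u)   # image distance
    return v, -h * v / u            # image height is inverted

def image_points(f, c, a, us):
    """Images of points on the tilted object plane h(u) = c + a*u,
    which intersects the lens plane (u = 0) at height c."""
    return [thin_lens_image(u, c + a * u, f) for u in us]
```

    For f = 50, c = 10, a = 0.2 the image points lie on the line h' = c - (a + c/f)v, which passes through height c at the lens plane, as the Scheimpflug rule predicts.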

  3. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the manual method applied until now. The data obtained are reproducible, even if the work is repeated by different workers. The method thus makes biometric studies of stromatoporoids objective.
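    Otsu's thresholding, used here to separate skeletal elements from sparry calcite, maximises the between-class variance of the grey-level histogram; a compact numpy sketch (our reimplementation, not Strommetric's OpenCV call):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the grey level t maximising between-class variance;
    pixels with value > t form the bright class."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(levels))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```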

  4. The medical analysis of child sexual abuse images.

    PubMed

    Cooper, Sharon W

    2011-11-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses, methods used in working with this form of contraband, and recommendations that analysts document their findings in a format that allows for verbal descriptions of the images so that the content will be reflected in legal proceedings should there exist an aversion to visual review. Child sexual abuse images are a digital crime scene, and analysis requires a careful approach to assure that all victims may be identified.

  5. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement of imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  6. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
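    The histogram-intersection step that flags outlier regions can be sketched per channel as follows (a minimal grey-scale version; the paper works on colour images and then clusters the outlier pixels with threshold-based K-means):

```python
import numpy as np

def hist_intersection(a, b, bins=16):
    """Normalised histogram intersection: 1.0 for identical intensity
    distributions, 0.0 for completely disjoint ones."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())
```

    A low intersection between a test leaf and the healthy reference indicates intensity mass in unexpected bins, i.e. candidate disease regions.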

  7. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of the eye under examination from the same image used to detect refraction errors.

  8. Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Zawacki, B. E.

    1978-01-01

    Special photographic or television image analysis is shown to be a potentially useful technique to assist the physician in the early diagnosis of thermal burn injury. A background on the medical and physiological problems of burns is presented. The proposed methodology for burns diagnosis from both the theoretical and clinical points of view is discussed. The television/computer system constructed to accomplish this analysis is described, and the clinical results are discussed.

  9. The Medical Analysis of Child Sexual Abuse Images

    ERIC Educational Resources Information Center

    Cooper, Sharon W.

    2011-01-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses,…

  11. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    PubMed

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  12. Evaluating wood failure in plywood shear by optical image analysis

    Treesearch

    Charles W. McMillin

    1984-01-01

    This exploratory study evaluates the potential of using an automatic image analysis method to measure percent wood failure in plywood shear specimens. The results suggest that this method may be as accurate as the visual method in tracking long-term gluebond quality. With further refinement, the method could lead to automated equipment replacing the subjective visual...

  13. Modulation of retinal image vasculature analysis to extend utility and provide secondary value from optical coherence tomography imaging

    PubMed Central

    Cameron, James R.; Ballerini, Lucia; Langan, Clare; Warren, Claire; Denholm, Nicholas; Smart, Katie; MacGillivray, Thomas J.

    2016-01-01

    Retinal image analysis is emerging as a key source of biomarkers of chronic systemic conditions affecting the cardiovascular system and brain. The rapid development and increasing diversity of commercial retinal imaging systems present a challenge to image analysis software providers. In addition, clinicians are looking to extract maximum value from the clinical imaging taking place. We describe how existing and well-established retinal vasculature segmentation and measurement software for fundus camera images has been modulated to analyze scanning laser ophthalmoscope retinal images generated by the dual-modality Heidelberg SPECTRALIS® instrument, which also features optical coherence tomography. PMID:27175375

  14. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time-consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and the amount of information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600m. Rainfall is high, and in most areas the soil consists of deep peat (1m to 3m), populated by a mix of heather, mosses and sedges. The cameras have been continuously in operation over a 6 month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets in the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions to simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention for only those images containing rare animals or unusual (undecidable) conditions, and

  15. Texture image analysis: application to the classification of bovine muscles from meat slice images

    NASA Astrophysics Data System (ADS)

    Basset, Olivier; Dupont, Florent; Hernandez, Ange; Odet, Christophe; Abouelkaram, Said; Culioli, Joseph

    1999-11-01

    Image texture is analyzed to provide a series of features for the classification of several sets of images. Images of meat slices are processed to classify various samples of bovine muscle as a function of three factors: animal age, muscle and castration. The different images present a particular texture that is a global representation of the connective tissue. The aim of texture analysis is to extract specific features for each kind of meat. The meat slices available for this study came from 19 animals, including 10 castrated animals. Their ages were 4 months (10 animals), 12 months (5 animals) and 16 months (4 animals). The same three muscles were studied for each animal. The texture analysis was carried out on digitized images using first- and second-order statistics of the gray levels and morphological parameters for the characterization of the marbling. Two classification methods were implemented: the k-nearest neighbors method and a method based on neural networks. Both methods give comparable results and lead to satisfactory classification of the samples in relation to the three variation factors. The correlation of the textural features with chemical and mechanical parameters measured on the meat samples is also examined. Regression experiments show that textural features have the potential to indicate meat characteristics.
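    As a concrete illustration of two ingredients named above - first-order gray-level statistics as texture features, and k-nearest neighbors as the classifier - here is a minimal pure-Python sketch. The function names and the choice of three moments are assumptions; the paper's actual feature set also includes second-order statistics and morphological marbling parameters.

```python
import math
from collections import Counter

def first_order_features(pixels):
    """First-order gray-level statistics: mean, variance, skewness."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    return (mean, var, skew)

def knn_classify(query, training, k=3):
    """Classify a feature vector by majority vote among its k nearest
    neighbours in feature space (Euclidean distance)."""
    dists = sorted((math.dist(query, feat), label) for feat, label in training)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

    With features extracted per slice image, `training` is a list of `(feature_vector, muscle_label)` pairs and each new sample is labelled by `knn_classify`.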

  16. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean that the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser; adjust the intensity min/max cutoffs, the scaling function, and the zoom level; apply color maps; view position and FITS header information; execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework; and overlay tiles for source catalog objects, etc.
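    The tiling-and-scaling scheme described above can be sketched in a few lines. The sketch below is hypothetical (Image Explorer's actual code is JavaScript, and these function names are assumptions); it shows the two core operations: mapping raw pixel values through user-chosen min/max cutoffs to the 0-255 display range, and computing the tile boxes that cover an image.

```python
def scale_intensity(data, vmin, vmax):
    """Map raw pixel values to the 0-255 display range with linear min/max
    cutoffs, as a viewer does when the user adjusts the cutoff sliders."""
    span = max(vmax - vmin, 1e-12)
    return [
        [max(0, min(255, int(255 * (v - vmin) / span))) for v in row]
        for row in data
    ]

def tile_bounds(width, height, tile_size):
    """Yield (x0, y0, x1, y1) boxes covering the image; edge tiles are
    clipped to the image boundary."""
    for y0 in range(0, height, tile_size):
        for x0 in range(0, width, tile_size):
            yield (x0, y0, min(x0 + tile_size, width), min(y0 + tile_size, height))
```

    Precomputing such tiles at several zoom levels is what lets the browser fetch only the visible portion of a large image instead of the whole FITS file.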

  17. An Integrative Object-Based Image Analysis Workflow for Uav Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the Local Difference Binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing the post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
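    The bottom-up merging that builds a BPT-like hierarchy can be illustrated with a toy version: repeatedly fuse the most similar pair of adjacent regions until no pair is similar enough. The sketch below uses scalar pixel values and a single mean-difference criterion - a stand-in for the paper's spectral, spatial, and topological criteria - and all names are assumptions.

```python
def merge_regions(regions, adjacency, threshold):
    """Greedy bottom-up region merging. `regions` maps region id -> list of
    pixel values; `adjacency` is a collection of (id, id) pairs. Repeatedly
    fuse the most similar adjacent pair until the best pair's mean
    difference exceeds the threshold."""
    regions = {k: list(v) for k, v in regions.items()}
    adjacency = {frozenset(p) for p in adjacency}

    def mean(vals):
        return sum(vals) / len(vals)

    while adjacency:
        score, a, b = min(
            (abs(mean(regions[a]) - mean(regions[b])), a, b)
            for a, b in (sorted(p) for p in adjacency)
        )
        if score > threshold:
            break
        # Absorb region b into region a and rewire b's adjacencies to a.
        regions[a].extend(regions.pop(b))
        adjacency = {
            frozenset(a if r == b else r for r in pair)
            for pair in adjacency if pair != frozenset((a, b))
        }
        adjacency = {p for p in adjacency if len(p) == 2}
    return regions
```

    In the real pipeline the initial regions would be SLIC super-pixels, and recording the sequence of merges (rather than just the final partition) yields the binary tree that the optimal-segmentation filtering then prunes.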

  18. Semivariogram Analysis of Bone Images Implemented on FPGA Architectures.

    PubMed

    Shirvaikar, Mukul; Lagadapati, Yamuna; Dong, Xuanliang

    2017-03-01

    Osteoporotic fractures are a major concern for the healthcare of elderly and female populations. Early diagnosis of patients with a high risk of osteoporotic fractures can be enhanced by introducing second-order statistical analysis of bone image data using techniques such as variogram analysis. Such analysis is computationally intensive, creating an impediment to its introduction into imaging machines found in common clinical settings. This paper investigates the fast implementation of the semivariogram algorithm, which has been proven to be effective in modeling bone strength, and should be of interest to readers in the areas of computer-aided diagnosis and quantitative image analysis. The semivariogram is a statistical measure of the spatial distribution of data, and is based on Markov Random Fields (MRFs). Semivariogram analysis is a computationally intensive algorithm that has typically seen applications in the geosciences and remote sensing areas. Recently, applications in the area of medical imaging have been investigated, resulting in the need for efficient real-time implementation of the algorithm. The semivariance, γ(h), is defined as half the expected squared difference of pixel values between any two data locations separated by a lag distance h. Because each pair of pixels in the image or sub-image being processed must be examined, the base algorithm complexity for an image window with n pixels is O(n²). Field Programmable Gate Arrays (FPGAs) are an attractive solution for such demanding applications due to their parallel processing capability, even though FPGAs tend to operate at relatively modest clock rates of a few hundred megahertz. This paper presents a technique for the fast computation of the semivariogram using two custom FPGA architectures. A modular architecture approach is chosen to allow for replication of processing units, giving high throughput through concurrent processing of pixel pairs. The current
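    The definition of γ(h) above translates directly into code. A minimal pure-Python sketch, computing the empirical semivariance along the horizontal axis only (function and parameter names are assumptions; the paper's FPGA designs parallelize exactly this pairwise loop):

```python
def semivariogram(image, h):
    """Empirical semivariance gamma(h) along the horizontal axis: half the
    mean squared difference of pixel values at lag h. `image` is a list of
    rows of gray values."""
    diffs = [
        (row[x] - row[x + h]) ** 2
        for row in image
        for x in range(len(row) - h)
    ]
    return sum(diffs) / (2 * len(diffs))
```

    Evaluating this for a range of lags gives the semivariogram curve whose shape the paper uses to characterize bone texture; the quadratic cost of sweeping all pixel pairs is what motivates the hardware implementation.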

  19. GANALYZER: A TOOL FOR AUTOMATIC GALAXY IMAGE ANALYSIS

    SciTech Connect

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
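    The radial intensity plot at the heart of the pipeline can be sketched as follows: for each radius, sample the pixel intensity around a circle centred on the galaxy. This is a hypothetical minimal version, not Ganalyzer's source code; the subsequent peak detection and slope fitting on the resulting plot are omitted.

```python
import math

def radial_intensity(image, cx, cy, radius, n_angles=360):
    """Sample pixel intensity around a circle of the given radius centred
    on (cx, cy) -- one row of a radial intensity plot. `image` is a list
    of rows; samples falling outside the image are skipped."""
    h, w = len(image), len(image[0])
    samples = []
    for i in range(n_angles):
        theta = 2 * math.pi * i / n_angles
        x = int(round(cx + radius * math.cos(theta)))
        y = int(round(cy + radius * math.sin(theta)))
        if 0 <= x < w and 0 <= y < h:
            samples.append(image[y][x])
    return samples
```

    Stacking these rows over increasing radii gives the 2-D plot in which a spiral arm appears as a peak that drifts in angle with radius; the drift rate is the slope used to quantify spirality.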

  20. Automatic Determination of Bacterioplankton Biomass by Image Analysis

    PubMed Central

    Bjørnsen, Peter Koefoed

    1986-01-01

    Image analysis was applied to epifluorescence microscopy of acridine orange-stained plankton samples. A program was developed for discrimination and binary segmentation of digitized video images, taken by an ultrasensitive video camera mounted on the microscope. Cell volumes were estimated from the area and perimeter of the objects in the binary image. The program was tested on fluorescent latex beads of known diameters. Biovolumes measured by image analysis were compared with directly determined carbon biomasses in batch cultures of estuarine and freshwater bacterioplankton. This calibration revealed an empirical conversion factor from biovolume to biomass of 0.35 pg of C μm⁻³ (±0.03, 95% confidence limit). The deviation of this value from the normally used conversion factors of 0.086 to 0.121 pg of C μm⁻³ is discussed. The described system was capable of measuring 250 cells within 10 min, providing estimates of cell number, mean cell volume, and biovolume with a precision of 5%. PMID:16347077
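    Converting a measured cell geometry into biomass, as calibrated above, is a two-step arithmetic exercise. The sketch below assumes the common rod-with-hemispherical-caps cell model (a shape assumption on our part; the abstract only states that volume is estimated from area and perimeter, without giving the formula) and applies the paper's empirical factor of 0.35 pg C μm⁻³.

```python
import math

def rod_volume(length, width):
    """Volume (um^3) of a cell modelled as a cylinder with hemispherical
    caps, a common shape assumption for bacterioplankton. For length ==
    width this reduces to the volume of a sphere."""
    return (math.pi / 4) * width ** 2 * (length - width / 3)

def biomass_pg_c(volume_um3, factor=0.35):
    """Carbon biomass using the paper's empirical 0.35 pg C per um^3."""
    return factor * volume_um3
```

    Note that swapping in the older 0.086-0.121 pg C μm⁻³ factors mentioned above would roughly triple-to-quadruple downward any biomass estimate, which is exactly the discrepancy the paper discusses.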