Science.gov

Sample records for image analysis

  1. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  2. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  3. Electronic image analysis

    NASA Astrophysics Data System (ADS)

    Gahm, J.; Grosskopf, R.; Jaeger, H.; Trautwein, F.

    1980-12-01

    An electronic system for image analysis was developed on the basis of low- and medium-cost integrated circuits. The printed circuit boards were designed using the principles of modern digital electronics and data processing. The system consists of modules for automatic, semiautomatic and visual image analysis. They can be used for microscopical and macroscopical observations; photographs can be evaluated, too. The automatic version is controlled by software modules adapted to various applications. The result is a system for image analysis suitable for many different measurement problems, in which the features contained in large image areas can be measured. For automatic routine analysis controlled by processing calculators, the necessary software and hardware modules are available.

  4. Basics of image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  5. Image Analysis of Foods.

    PubMed

    Russ, John C

    2015-09-01

    The structure of foods, both natural and processed, is controlled by many variables ranging from biology to chemistry and mechanical forces. The structure in turn controls many properties of the food, including consumer acceptance, taste, mouthfeel, appearance, and nutrition. Imaging provides an important tool for measuring the structure of foods. This includes 2-dimensional (2D) images of surfaces and sections, for example as viewed in a microscope, as well as 3-dimensional (3D) images of internal structure as may be produced by confocal microscopy, computed tomography, or magnetic resonance imaging. The use of images also guides robotics for harvesting and sorting. Processing of images may be needed to calibrate colors, reduce noise, enhance detail, and delineate structure and dimensions. Measurement of structural information such as volume fractions and internal surface areas, as well as the analysis of object size, location, and shape in both 2- and 3-dimensional images, is illustrated and described, with primary references and examples from a wide range of applications. PMID:26270611

  6. Picosecond Imaging Circuit Analysis

    NASA Astrophysics Data System (ADS)

    Kash, Jeffrey A.

    1998-03-01

    With ever-increasing complexity, probing the internal operation of a silicon IC becomes more challenging, and present methods of internal probing are becoming obsolete. We have discovered that a very weak picosecond pulse of light is emitted by each FET in a CMOS circuit whenever the circuit changes logic state. This pulsed emission can be simultaneously imaged and time resolved, using a technique we have named Picosecond Imaging Circuit Analysis (PICA). With a suitable imaging detector, PICA allows time-resolved measurement on thousands of devices simultaneously. Computer videos made from measurements on real ICs will be shown. These videos, along with a more quantitative evaluation of the light emission, permit the complete operation of an IC to be measured in a non-invasive way with picosecond time resolution.

  7. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  8. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process: the upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  9. Digital Image Analysis of Cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  10. Radar image analysis utilizing junctive image metamorphosis

    NASA Astrophysics Data System (ADS)

    Krueger, Peter G.; Gouge, Sally B.; Gouge, Jim O.

    1998-09-01

    A feasibility study was initiated to investigate the ability of algorithms developed for medical sonogram image analysis to be trained for extraction of cartographic information from synthetic aperture radar imagery. BioComputer Research Inc. has applied proprietary `junctive image metamorphosis' algorithms to cancer cell recognition and identification in ultrasound prostate images. These algorithms have been shown to support automatic radar image feature detection and identification. Training set images were used to develop determinants for representative point, line and area features, which were then used on test images to identify and localize the features of interest. The software is computationally conservative, operating on a PC platform in real time. The algorithms are robust and can be trained for feature recognition on any digital imagery, not just imagery formed from reflected energy such as sonograms and radar images. Applications include land mass characterization, feature identification, target recognition, and change detection.

  11. Statistical image analysis of longitudinal RAVENS images

    PubMed Central

    Lee, Seonjoo; Zipunnikov, Vadim; Reich, Daniel S.; Pham, Dzung L.

    2015-01-01

    Regional analysis of volumes examined in normalized space (RAVENS) images are transformation images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit-specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression. PMID:26539071
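
    A compact way to write the three-component decomposition described above, in generic LFPCA notation (symbols assumed for illustration, not taken verbatim from the paper): for subject i at visit j, with time t_{ij} since baseline, the RAVENS image at voxel v is modelled as

      Y_{ij}(v) = \eta(v) + X_i(v) + Z_i(v)\,t_{ij} + W_{ij}(v)

    where \eta is the population mean image, X_i the subject-specific random intercept (cross-sectional variability), Z_i the subject-specific slope (irreversible longitudinal change), and W_{ij} the subject-visit-specific deviation that absorbs registration error and visit-level noise.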

  12. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound, and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From its roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time. PMID:27503078

  13. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  14. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  15. Hyperspectral image analysis. A tutorial.

    PubMed

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-10-01

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics such as hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification and final digital image processing are covered, and some guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. The reader is thus guided through every single step and oriented in order to adapt those strategies to the user's case. PMID:26481986
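
    The classification core of such a workflow is compact enough to sketch in code. The snippet below is an illustrative outline only, with made-up array shapes and synthetic data: PLS-DA is implemented, as is common, by regressing a one-hot class matrix on the unfolded spectra with scikit-learn's PLSRegression; the tutorial's own toolchain may differ.

      # Sketch: PLS-DA classification of hyperspectral pixels (assumed shapes/data).
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.preprocessing import StandardScaler

      rows, cols, bands, n_classes = 64, 64, 200, 3
      rng = np.random.default_rng(0)
      cube = rng.random((rows, cols, bands))            # stand-in NIR hyperspectral image
      labels = rng.integers(0, n_classes, rows * cols)  # per-pixel plastic type

      X = StandardScaler().fit_transform(cube.reshape(-1, bands))  # unfold + pre-process
      Y = np.eye(n_classes)[labels]                     # one-hot dummy matrix for PLS-DA

      pls = PLSRegression(n_components=10).fit(X, Y)    # latent-variable regression
      pred = pls.predict(X).argmax(axis=1)              # class = largest predicted dummy
      class_map = pred.reshape(rows, cols)              # fold back to a classification map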

  16. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe. PMID:20671804

  17. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses:
      - Ease of integration (minimal external dependencies)
      - Fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor)
      - Written entirely in C (easily modified)
      - Mostly static memory allocation
      - 8-bit image data
    The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
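
    The "integral image" named above is the standard summed-area table. A minimal Python sketch of the idea follows (FIIAT itself is a C library; this is not its interface):

      # Integral image: O(1) sums over any rectangular region after one O(N) pass.
      import numpy as np

      def integral_image(img):
          # ii[r, c] holds the sum of img[0:r, 0:c]; a zero row/column pads the edges
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
          ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
          return ii

      def box_sum(ii, r0, c0, r1, c1):
          # Sum of pixels in the rectangle [r0, r1) x [c0, c1) from four lookups
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

      img = np.arange(16, dtype=np.int64).reshape(4, 4)   # toy 8-bit-style image
      ii = integral_image(img)
      assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()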

  18. Image Analysis in Surgical Pathology.

    PubMed

    Lloyd, Mark C; Monaco, James P; Bui, Marilyn M

    2016-06-01

    Digitization of glass slides of surgical pathology samples facilitates a number of value-added capabilities beyond what a pathologist could previously do with a microscope. Image analysis is one of the most fundamental opportunities to leverage the advantages that digital pathology provides. The ability to quantify aspects of a digital image is an extraordinary opportunity to collect data with exquisite accuracy and reliability. In this review, we describe the history of image analysis in pathology and the present state of technology processes as well as examples of research and clinical use. PMID:27241112

  19. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the length of time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
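
    Once lanes are reduced to aligned one-dimensional profiles, locating bands is essentially peak detection. A toy sketch with a synthetic profile and hypothetical thresholds (the paper's adaptive, rule-based pipeline is considerably more elaborate):

      # Sketch: band location in a 1-D lane profile (each peak = one nucleotide band).
      import numpy as np
      from scipy.signal import find_peaks

      x = np.linspace(0, 1, 500)
      profile = sum(np.exp(-((x - c) ** 2) / 2e-4) for c in (0.2, 0.35, 0.55, 0.8))
      profile += 0.05 * np.random.default_rng(1).random(x.size)   # background noise

      # Fixed height/distance thresholds stand in for adaptive background estimation
      peaks, _ = find_peaks(profile, height=0.5, distance=20)
      print("band positions:", x[peaks])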

  20. Anmap: Image and data analysis

    NASA Astrophysics Data System (ADS)

    Alexander, Paul; Waldram, Elizabeth; Titterington, David; Rees, Nick

    2014-11-01

    Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.

  1. Image analysis and quantitative morphology.

    PubMed

    Mandarim-de-Lacerda, Carlos Alberto; Fernandes-Santos, Caroline; Aguila, Marcia Barbosa

    2010-01-01

    Quantitative studies are increasingly found in the literature, particularly in the fields of development/evolution, pathology, and neurosciences. Image digitalization converts tissue images into a numeric form by dividing them into very small regions termed picture elements or pixels. Image analysis allows automatic morphometry of digitalized images, and stereology aims to understand the structural inner three-dimensional arrangement based on the analysis of slices showing two-dimensional information. To quantify morphological structures in an unbiased and reproducible manner, appropriate isotropic and uniform random sampling of sections, and updated stereological tools are needed. Through the correct use of stereology, a quantitative study can be performed with little effort; efficiency in stereology means as little counting as possible (little work), low cost (section preparation), but still good accuracy. This short text provides a background guide for non-expert morphologists. PMID:19960334
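
    As one concrete instance of "as little counting as possible": the classical design-based estimator of volume fraction (standard stereology, not specific to this text) is obtained by point counting on isotropic, uniformly random sections,

      \hat{V}_V(\text{structure}) = \frac{\sum_i P_i(\text{structure})}{\sum_i P_i(\text{reference})}

    where P_i(structure) and P_i(reference) are the numbers of grid test points hitting the structure and the reference space on section i.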

  2. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  3. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
    Program summary:
      Title of program: HAWGC
      Catalogue identifier: ADXG_v1_0
      Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
      Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
      Computers: Mobile Intel Pentium III, AMD Duron
      Installations: No installation necessary; executable file together with necessary files for LabVIEW Run-time engine
      Operating systems under which the program has been tested: Windows ME/2000/XP
      Programming language used: LabVIEW 7.0
      Memory required to execute with typical data: ~16 MB for starting and ~160 MB used for
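
    The computation HAWGC performs is easy to reproduce outside LabVIEW. A minimal NumPy/SciPy sketch of the same pipeline, with illustrative conversion coefficients and thresholds and synthetic data:

      # Sketch: RGB -> intensity-linear greyscale, thresholding, histogram statistics.
      import numpy as np
      from scipy import stats

      rgb = np.random.default_rng(2).integers(0, 256, (128, 128, 3)).astype(float)
      grey = rgb @ np.array([0.299, 0.587, 0.114])   # selectable conversion coefficients

      lo, hi = 10.0, 250.0                           # lower/upper threshold levels
      vals = grey[(grey >= lo) & (grey <= hi)].ravel()

      counts, edges = np.histogram(vals, bins=256, range=(0, 256))
      print("mean:", vals.mean(), "std:", vals.std(), "var:", vals.var())
      print("min/max:", vals.min(), vals.max(), "mode bin:", edges[counts.argmax()])
      print("skewness:", stats.skew(vals), "kurtosis:", stats.kurtosis(vals))
      print("median:", np.median(vals))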

  4. A computational image analysis glossary for biologists.

    PubMed

    Roeder, Adrienne H K; Cunha, Alexandre; Burl, Michael C; Meyerowitz, Elliot M

    2012-09-01

    Recent advances in biological imaging have resulted in an explosion in the quality and quantity of images obtained in a digital format. Developmental biologists are increasingly acquiring beautiful and complex images, thus creating vast image datasets. In the past, patterns in image data have been detected by the human eye. Larger datasets, however, necessitate high-throughput objective analysis tools to computationally extract quantitative information from the images. These tools have been developed in collaborations between biologists, computer scientists, mathematicians and physicists. In this Primer we present a glossary of image analysis terms to aid biologists and briefly discuss the importance of robust image analysis in developmental studies. PMID:22872081

  5. Target identification by image analysis.

    PubMed

    Fetz, V; Prochnow, H; Brönstrup, M; Sasse, F

    2016-05-01

    Covering: 1997 to the end of 2015. Each biologically active compound induces phenotypic changes in target cells that are characteristic for its mode of action. These phenotypic alterations can be directly observed under the microscope or made visible by labelling structural elements or selected proteins of the cells with dyes. A comparison of the cellular phenotype induced by a compound of interest with the phenotypes of reference compounds with known cellular targets allows predicting its mode of action. While this approach has been successfully applied to the characterization of natural products based on a visual inspection of images, recent studies used automated microscopy and analysis software to increase speed and to reduce subjective interpretation. In this review, we give a general outline of the workflow for manual and automated image analysis, and we highlight natural products whose bacterial and eukaryotic targets could be identified through such approaches. PMID:26777141

  6. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  7. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging. PMID:26833935
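
    A schematic version of the unfold-normalise-pool-cluster steps, with synthetic data and assumed shapes; the paper's quantification step uses multivariate curve resolution, which is omitted here, and only the k-means discrimination stage is shown:

      # Sketch: combine multiple Raman maps into one dataset, then cluster spectra.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(3)
      images = [rng.random((32, 32, 600)) for _ in range(8)]   # 8 maps, 600 Raman shifts

      unfolded = np.vstack([im.reshape(-1, im.shape[-1]) for im in images])

      # Per-spectrum normalisation renders spectra from different maps comparable
      norm = (unfolded - unfolded.mean(1, keepdims=True)) / unfolded.std(1, keepdims=True)

      km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(norm)
      labels = km.labels_.reshape(len(images), 32, 32)         # cluster map per image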

  8. Grid computing in image analysis

    PubMed Central

    2011-01-01

    Diagnostic surgical pathology, or tissue-based diagnosis, still remains the most reliable and specific diagnostic medical procedure. The development of whole slide scanners permits the creation of virtual slides and work on so-called virtual microscopes. In addition to interactive work on virtual slides, approaches have been reported that introduce automated virtual microscopy, which is composed of several tools focusing on quite different tasks. These include evaluation of image quality and image standardization, analysis of potentially useful thresholds for object detection and identification (segmentation), dynamic segmentation procedures, adjustable magnification to optimize feature extraction, and texture analysis including image transformation and evaluation of elementary primitives. Grid technology seems to possess all features to efficiently target and control the specific tasks of image information and detection in order to obtain a detailed and accurate diagnosis. Grid technology is based upon so-called nodes that are linked together and share certain communication rules in using open standards. Their number and functionality can vary according to the needs of a specific user at a given point in time. When implementing automated virtual microscopy with Grid technology, all five different Grid functions have to be taken into account, namely 1) computation services, 2) data services, 3) application services, 4) information services, and 5) knowledge services. Although all mandatory tools of automated virtual microscopy can be implemented in a closed or standardized open system, Grid technology offers a new dimension to acquire, detect, classify, and distribute medical image information, and to assure quality in tissue-based diagnosis. PMID:21516880

  9. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated by 40 human subjects' data and demonstrates high correlation with experts' annotations.

  10. Neural network ultrasound image analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Alexander C.; Brown, David G.; Pastel, Mary S.

    1993-09-01

    Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically-inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.
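
    For readers unfamiliar with the figure of merit used here: the area under the receiver operating characteristic (ROC) curve summarises two-class discrimination, with 1.0 perfect and 0.5 chance. A generic, hypothetical sketch with toy features (not the Garra et al. data):

      # Sketch: backpropagation classifier scored by ROC area (AUC).
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(4)
      X = rng.random((300, 6))                                  # 6 texture features/scan
      y = (X[:, 0] + 0.3 * rng.random(300) > 0.6).astype(int)   # toy disease labels

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(Xtr, ytr)
      print("AUC:", roc_auc_score(yte, net.predict_proba(Xte)[:, 1]))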

  11. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment for the purpose of verifying the correctness of the inspection analysis paradigms, developing better analysis paradigms, and gathering statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks and image processing as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats, such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto the resultant. Like a spreadsheet, if the input value of any cell is changed its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.
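
    The cell-and-cascade design is easy to picture with a toy dependency graph. The sketch below is a hypothetical illustration of the behaviour described (matrix-valued cells whose changes cascade to dependants), not the Army system itself:

      # Sketch: spreadsheet-like cells whose resultants may be matrices (images).
      import numpy as np

      class Cell:
          def __init__(self, fn, *inputs):
              self.fn, self.inputs, self.dependants = fn, list(inputs), []
              for cell in inputs:
                  cell.dependants.append(self)
              self.recompute()

          def recompute(self):
              self.value = self.fn(*[c.value for c in self.inputs])
              for d in self.dependants:
                  d.recompute()                     # cascade, as in a spreadsheet

          def set(self, constant):
              self.fn = lambda: constant
              self.recompute()

      raw = Cell(lambda: np.ones((4, 4)))           # a matrix-valued "image" cell
      scaled = Cell(lambda m: 2.0 * m, raw)         # function of another cell
      total = Cell(lambda m: m.sum(), scaled)
      raw.set(np.eye(4))                            # change cascades to dependants
      print(total.value)                            # 8.0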

  12. IMAGE ANALYSIS ALGORITHMS FOR DUAL MODE IMAGING SYSTEMS

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Miller, Erin A.; Misner, Alex C.; Myjak, Mitchell J.; Pitts, W. Karl; Seifert, Allen; Seifert, Carolyn E.; Woodring, Mitchell L.

    2010-06-11

    The level of detail discernable in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes where information barriers are mandatory. However, if a balance can be struck between sufficient information barriers and feature extraction to verify or identify objects of interest, imaging may significantly advance verification efforts. This paper describes the development of combined active (conventional) radiography and passive (auto) radiography techniques for imaging sensitive items assuming that comparison images cannot be furnished. Three image analysis algorithms are presented, each of which reduces full image information to non-sensitive feature information and ultimately is intended to provide only a yes/no response verifying features present in the image. These algorithms are evaluated on both their technical performance in image analysis and their application with or without an explicitly constructed information barrier. The first algorithm reduces images to non-invertible pixel intensity histograms, retaining only summary information about the image that can be used in template comparisons. This one-way transform is sufficient to discriminate between different image structures (in terms of area and density) without revealing unnecessary specificity. The second algorithm estimates the attenuation cross-section of objects of known shape based on transition characteristics around the edge of the object’s image. The third algorithm compares the radiography image with the passive image to discriminate dense, radioactive material from point sources or inactive dense material. By comparing two images and reporting only a single statistic from the combination thereof, this algorithm can operate entirely behind an information barrier stage. Together with knowledge of the radiography system, the use of these algorithms in combination can be used to improve verification capability to inspection regimes and improve
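
    A toy version of the first algorithm illustrates the information-barrier idea: the image is reduced to a non-invertible intensity histogram and only a single match statistic is reported. The bin count, distance measure and threshold below are illustrative assumptions:

      # Sketch: one-way histogram transform plus a single yes/no template statistic.
      import numpy as np

      def one_way_transform(img, bins=64):
          h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
          return h / h.sum()              # summary only; the image is not recoverable

      rng = np.random.default_rng(5)
      item, template = rng.random((256, 256)), rng.random((256, 256))

      h1, h2 = one_way_transform(item), one_way_transform(template)
      distance = 0.5 * np.abs(h1 - h2).sum()   # single statistic crossing the barrier
      print("match" if distance < 0.05 else "no match")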

  13. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
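
    The SNR measurement itself is straightforward. A schematic sketch with synthetic data (the ROI position, size and image statistics are made up):

      # Sketch: ROI signal-to-noise ratio from a flat-field-corrected raw image.
      import numpy as np

      rng = np.random.default_rng(6)
      raw = 100.0 + 5.0 * rng.standard_normal((2000, 1500))   # toy dose-linear image

      r, c, half = 1000, 750, 25            # retroareolar ROI centre and half-width
      roi = raw[r - half:r + half, c - half:c + half]
      print("SNR:", roi.mean() / roi.std()) # mean signal over noise within the ROI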

  14. Image analysis applications for grain science

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Steele, James L.

    1991-02-01

    Morphometrical features of single grain kernels or particles were used to discriminate two visibly similar wheat varieties, foreign material in wheat, hard/soft and spring/winter wheat classes, and whole from broken corn kernels. Milled fractions of hard and soft wheat were evaluated using textural image analysis. Color image analysis of sound and mold-damaged corn kernels yielded high recognition rates. The studies collectively demonstrate the potential for automated classification and assessment of grain quality using image analysis.

  15. Automatic processing, analysis, and recognition of images

    NASA Astrophysics Data System (ADS)

    Abrukov, Victor S.; Smirnov, Evgeniy V.; Ivanov, Dmitriy G.

    2004-11-01

    New approaches and computer codes (A&CC) for automatic processing, analysis and recognition of images are offered. The A&CC are based on representing an object image as a collection of pixels of various colours and on consecutive automatic painting of distinct parts of the image. The technical objectives of the A&CC centre on such directions as 1) image processing, 2) image feature extraction, and 3) image analysis, among others, in any sequence and combination. The A&CC allow various geometrical and statistical parameters of an object image and its parts to be obtained. Additional possibilities arise from combining the A&CC with artificial neural network technologies. We believe that the A&CC can be used in building systems for testing and control in various fields of industry and military applications (airborne imaging systems, tracking of moving objects), in medical diagnostics, in new software for CCDs, and in industrial vision and decision-making systems. The capabilities of the A&CC have been tested on image analysis of model fires and plumes of sprayed fluid, on ensembles of particles, on decoding of interferometric images, on digitization of paper diagrams of electrical signals, on text recognition, on image noise removal and filtering, on analysis of astronomical images and aerial photography, and on object detection.

  16. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  17. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  18. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
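
    The patented criterion lends itself to a compact sketch. The toy implementation below follows the description (edge detection, exhaustive translations over a predefined search region, percentage of matched edge pixels) but substitutes a simple gradient-magnitude edge detector and wrap-around shifts for brevity:

      # Sketch: registration by maximising the percentage of matched edge pixels.
      import numpy as np

      def edges(img, thresh=0.5):
          gy, gx = np.gradient(img.astype(float))
          return np.hypot(gx, gy) > thresh

      rng = np.random.default_rng(7)
      first = rng.random((64, 64))
      second = np.roll(first, (3, -2), axis=(0, 1))      # known shift for the demo

      e1, e2 = edges(first), edges(second)
      best, best_pct = (0, 0), -1.0
      for dr in range(-5, 6):                            # predefined search region
          for dc in range(-5, 6):
              shifted = np.roll(e1, (dr, dc), axis=(0, 1))
              pct = (e2 & shifted).sum() / max(e2.sum(), 1)
              if pct > best_pct:
                  best, best_pct = (dr, dc), pct
      print("best registration point:", best, "matched fraction:", best_pct)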

  19. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  20. Millimeter-wave sensor image analysis

    NASA Technical Reports Server (NTRS)

    Wilson, William J.; Suess, Helmut

    1989-01-01

    Images from an airborne scanning radiometer operating at a frequency of 98 GHz have been analyzed. The mm-wave images were obtained in 1985/1986 using the JPL mm-wave imaging sensor. The goal of this study was to enhance the information content of these images and make their interpretation easier for human analysis. In this paper, a visual interpretative approach was used for information extraction from the images. This included application of nonlinear transform techniques for noise reduction and for color, contrast and edge enhancement. Results of the techniques on selected mm-wave images are presented.

  1. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  2. Quantitative analysis of digital microscope images.

    PubMed

    Wolf, David E; Samarasekera, Champika; Swedlow, Jason R

    2013-01-01

    This chapter discusses quantitative analysis of digital microscope images and presents several exercises that provide examples to explain the concepts. It presents the basic concepts in quantitative analysis for imaging, concepts that rest on a well-established foundation of signal theory and quantitative data analysis. Several examples are given for understanding the imaging process as a transformation from sample to image, together with the limits and considerations of quantitative analysis. The chapter introduces the concept of digitally correcting images, and focuses on some of the more critical types of data transformation and some of the frequently encountered issues in quantization. Image processing represents a form of data processing, of which there are many examples, such as fitting the data to a theoretical curve. In all these cases, it is critical that care is taken during all steps of transformation, processing, and quantization. PMID:23931513

  3. Multiscale Analysis of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C. A.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is that there is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also cursed us with an increased amount of higher-complexity data than previous missions. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer, and there is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are possibly suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to formulate the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing done by the observer's eyes and brain. In this work we present a preliminary analysis of multiscale techniques applied to solar image data. Specifically, we explore the use of the 2-d wavelet transform and related transforms with EIT, LASCO and TRACE images. This work was supported by NASA contract NAS5-00220.
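
    As an example of the kind of objective multiscale measure argued for above, the sketch below computes per-scale detail energies from a 2-d wavelet decomposition using PyWavelets; a random array stands in for an EIT, LASCO or TRACE frame:

      # Sketch: multiscale (wavelet) summary statistics for a solar image.
      import numpy as np
      import pywt

      img = np.random.default_rng(8).random((256, 256))   # stand-in solar frame
      coeffs = pywt.wavedec2(img, wavelet='haar', level=3)

      approx = coeffs[0]                                  # coarsest approximation
      for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
          energy = (ch**2 + cv**2 + cd**2).sum()          # detail energy per scale
          print(f"scale {lvl}: detail energy = {energy:.1f}")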

  4. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
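
    The simplest of the tools described, intensity thresholding followed by voxel counting, can be sketched in a few lines; the threshold and voxel size below are hypothetical:

      # Sketch: threshold a 3-D SPECT volume and compute the segmented volume.
      import numpy as np

      spect = np.random.default_rng(9).random((64, 64, 64))  # toy reconstructed volume
      voxel_mm3 = 4.0 * 4.0 * 4.0                            # assumed voxel size, mm^3

      mask = spect > 0.8                                     # interactive threshold
      print("segmented volume:", mask.sum() * voxel_mm3 / 1000.0, "ml")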

  5. Image Reconstruction Using Analysis Model Prior.

    PubMed

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
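
    For readers outside sparse recovery, the analysis-model reconstruction referred to here is commonly posed as (standard formulation, with symbols assumed for illustration)

      \hat{x} = \arg\min_x \|\Omega x\|_0 \quad \text{subject to} \quad \|y - A x\|_2 \le \varepsilon

    where A is the undersampled measurement operator, y the measured data, and \Omega the analysis operator (for instance the 2D finite-difference operator studied in the paper). The image x is cosparse when \Omega x contains many zeros, and the ICD scheme iteratively estimates the locations of those zeros (the cosupport) to strengthen the prior.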

  7. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition, and analysis of biological images play an important role in helping humans describe and understand the related biological information. The color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity, and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. The method can be used for shape description and recognition. The analysis, decision, and classification of the biological images are based on the description of morphological structures, color information, and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated with two applications. One application is the description, recognition, and analysis of color flower images; the other is the dynamic description, recognition, and analysis of cell-cycle images.
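
    A minimal sketch of the difference chain code idea for a closed 8-connected contour; the example code sequence is hypothetical, and linearization would look for runs where the difference stays near zero (quasi-straight segments).

        import numpy as np

        chain = np.array([0, 0, 1, 0, 7, 7, 6, 6, 5, 4, 4, 3, 2, 2, 1])  # 8-direction codes
        diff = (np.roll(chain, -1) - chain) % 8     # difference chain code
        diff[diff > 4] -= 8                         # signed turn in (-4, 4]
        print("difference chain code:", diff)
        print("straight-ish steps:", int(np.sum(diff == 0)))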

  8. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  9. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part phase coherent transparencies having 20-100 human multiple myeloma nuclei were simultaneously photographed at 100 power magnification using high resolution holographic film developed to high contrast. An optical transform was obtained by focussing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors was discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem low image bias templates were generated by four techniques: microphotography of samples, creation of typical shapes by computer graphics editor, transmission holography of photoplates of samples, and by spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor. To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  10. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give
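
    The line-pair target described above is easy to synthesize; a minimal sketch, with dimensions as assumptions:

        import numpy as np

        h, w = 64, 120
        img = np.zeros((h, w), dtype=np.uint8)
        img[:, :10] = 0           # black equilibration strip, ten pixels wide
        img[:, 10:20] = 255       # white equilibration strip, ten pixels wide
        img[:, 20::2] = 255       # alternating one-pixel black/white lines
        # a capture chain with an inadequate slew rate renders the line region
        # as an average gray instead of resolving the individual lines
        print(img[0, 18:30])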

  11. Scale-Specific Multifractal Medical Image Analysis

    PubMed Central

    Braverman, Boris

    2013-01-01

    Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588

  12. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features--including area fraction, particle size and spatial distributions, and grain sizes and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.
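
    Nearest-pixel matching of adjacent montage frames can be done with phase correlation; a minimal sketch using scikit-image, with a synthetic shift standing in for stage positioning error (this is an illustration, not the Sandia system's code).

        import numpy as np
        from skimage.registration import phase_cross_correlation

        rng = np.random.default_rng(1)
        frame_a = rng.random((256, 256))
        frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))  # known offset
        shift, error, _ = phase_cross_correlation(frame_a, frame_b)
        print("estimated translation (rows, cols):", shift)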

  13. Factor Analysis of the Image Correlation Matrix.

    ERIC Educational Resources Information Center

    Kaiser, Henry F.; Cerny, Barbara A.

    1979-01-01

    Whether to factor the image correlation matrix or to use a new model with an alpha factor analysis of it is mentioned, with particular reference to the determinacy problem. It is pointed out that the distribution of the images is sensibly multivariate normal, making for "better" factor analyses. (Author/CTM)

  14. Viewing angle analysis of integral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hong-Xia; Wu, Chun-Hong; Yang, Yang; Zhang, Lan

    2007-12-01

    Integral imaging (II) is a technique capable of displaying 3D images with continuous parallax in full natural color. Owing to its outstanding advantages, it is becoming one of the most promising techniques for next-generation three-dimensional TV (3DTV) and visualization. However, most conventional integral images are restricted by a narrow viewing angle. One reason is that the range in which a reconstructed integral image can be displayed with consistent parallax is limited; the other is that the aperture of the system is finite. Many methods to enhance the viewing angle of integral images have so far been proposed. Nevertheless, except for Ren's MVW (Maximum Viewing Width), most of these methods involve complex hardware and modifications of the optical system, which usually bring other disadvantages, make operation more difficult, and raise the cost. In order to keep the optical system simple, this paper systematically analyzes the viewing angle of traditional integral images rather than modified ones; for the sake of cost, the research was based on computer generated integral images (CGII). The analysis shows clearly how the viewing angle can be enhanced and how image overlap or image flipping can be avoided. The result also promotes the development of optical instruments. Based on the theoretical analysis, preliminary calculations were done to demonstrate how other viewing properties closely related to the viewing angle, such as viewing distance, viewing zone, and lens pitch, affect the viewing angle.
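
    For orientation, a commonly quoted first-order relation (an assumption here, not taken from this paper) ties the viewing angle psi to the lens pitch p and the gap g between the lens array and the display: psi = 2*arctan(p/(2g)). A tiny sketch:

        import math

        p = 1.0   # lens pitch in mm (hypothetical)
        g = 3.0   # lens-array-to-display gap in mm (hypothetical)
        psi = 2 * math.degrees(math.atan(p / (2 * g)))
        print(f"first-order viewing angle: {psi:.1f} degrees")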

  15. Depth-based selective image reconstruction using spatiotemporal image analysis

    NASA Astrophysics Data System (ADS)

    Haga, Tetsuji; Sumi, Kazuhiko; Hashimoto, Manabu; Seki, Akinobu

    1999-03-01

    In industrial plants, a remote monitoring system that eliminates physical tour inspection is often considered desirable. However, the image sequence obtained from a mobile inspection robot is hard to interpret because the objects of interest are often partially occluded by obstacles such as pillars or fences. Our aim is to improve the image sequence so as to increase the efficiency and reliability of remote visual inspection. We propose a new depth-based image processing technique, which removes the needless objects from the foreground and recovers the occluded background electronically. Our algorithm is based on spatiotemporal analysis that enables fine and dense depth estimation, depth-based precise segmentation, and accurate interpolation. We apply this technique to a real image sequence acquired by the mobile inspection robot. The resulting image sequence is satisfactory in that the operator can make correct visual inspections with less fatigue.

  16. A Robust Actin Filaments Image Analysis Framework.

    PubMed

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-08-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance for unveiling the mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the 'cartoon' image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in
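
    The cartoon/texture idea can be approximated with off-the-shelf pieces: total-variation smoothing as a stand-in for the cartoon part, followed by a multi-scale ridge detector. A minimal scikit-image sketch, not the authors' exact pipeline; the image and threshold are synthetic.

        import numpy as np
        from skimage.restoration import denoise_tv_chambolle
        from skimage.filters import sato

        img = np.random.rand(128, 128)                   # stand-in fluorescence image
        cartoon = denoise_tv_chambolle(img, weight=0.1)  # structure without texture/noise
        ridges = sato(cartoon, sigmas=range(1, 4), black_ridges=False)
        filaments = ridges > ridges.mean() + 2 * ridges.std()
        print("candidate filament pixels:", int(filaments.sum()))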

  18. Linear digital imaging system fidelity analysis

    NASA Technical Reports Server (NTRS)

    Park, Stephen K.

    1989-01-01

    The combined effects of image gathering, sampling, and reconstruction are analyzed in terms of image fidelity. The analysis is based upon a standard end-to-end linear system model which is sufficiently general that the results apply to most line-scan and sensor-array imaging systems. Shift-variant sampling effects are accounted for with an expected-value analysis based upon the use of a fixed deterministic input scene which is randomly shifted (mathematically) relative to the sampling grid. This random sample-scene phase approach has been used successfully by the author and associates in several previous related papers.

  19. Infrared image processing and data analysis

    NASA Astrophysics Data System (ADS)

    Ibarra-Castanedo, C.; González, D.; Klein, M.; Pilla, M.; Vallerand, S.; Maldague, X.

    2004-12-01

    Infrared thermography in nondestructive testing provides images (thermograms) in which zones of interest (defects) sometimes appear as subtle signatures. In this context, raw images are often not appropriate, since most defects will be missed. In other cases, what is needed is a quantitative analysis, such as for defect detection and characterization. In this paper, various methods of data analysis required for preprocessing and/or processing of images are presented. References from the literature are provided for known methods, which are discussed briefly, while novelties are elaborated in more detail within the text, together with experimental results.

  1. Malware analysis using visualized image matrices.

    PubMed

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, our proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples, and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202
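
    A minimal sketch of the visualization step, hashing opcode trigrams to RGB pixels; the opcode list and image width are hypothetical stand-ins for sequences extracted from real samples.

        import hashlib
        import numpy as np

        opcodes = ["push", "mov", "call", "add", "mov", "jmp", "call", "ret"] * 32
        pixels = [list(hashlib.md5("".join(opcodes[i:i + 3]).encode()).digest()[:3])
                  for i in range(len(opcodes) - 2)]       # 3 hash bytes -> (R, G, B)
        width = 16
        rows = len(pixels) // width
        matrix = np.array(pixels[:rows * width], dtype=np.uint8).reshape(rows, width, 3)
        print("image matrix shape:", matrix.shape)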

  2. Image texture analysis of crushed wheat kernels

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Martin, C. R.; Steele, James L.; Dempster, Richard E.

    1992-03-01

    The development of new approaches for wheat hardness assessment may impact the grain industry in marketing, milling, and breeding. This study used image texture features for wheat hardness evaluation. Application of digital imaging to grain for grading purposes is principally based on morphometrical (shape and size) characteristics of the kernels. A composite sample of 320 kernels from 17 wheat varieties was collected after testing and crushing with a single-kernel hardness characterization meter. Six wheat classes were represented: HRW, HRS, SRW, SWW, Durum, and Club. In this study, parameters which characterize texture, or the spatial distribution of gray levels in an image, were determined and used to classify images of crushed wheat kernels. The texture parameters of crushed wheat kernel images differed with the class, hardness, and variety of the wheat. Image texture analysis of crushed wheat kernels showed promise for use in class, hardness, milling quality, and variety discrimination.
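
    Texture parameters of this kind are often derived from gray-level co-occurrence matrices; a minimal scikit-image sketch, in which the random patch stands in for a crushed-kernel image and the feature set is illustrative.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            print(prop, graycoprops(glcm, prop).ravel())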

  3. Breast tomosynthesis imaging configuration analysis.

    PubMed

    Rayford, Cleveland E; Zhou, Weihua; Chen, Ying

    2013-01-01

    Traditional two-dimensional (2D) X-ray mammography is the most commonly used method for breast cancer diagnosis. Recently, a three-dimensional (3D) Digital Breast Tomosynthesis (DBT) system has been invented, which is likely to challenge the current mammography technology. The DBT system provides stunning 3D information, giving physicians increased detail of anatomical information, while reducing the chance of false negative screening. In this research, two reconstruction algorithms, Back Projection (BP) and Shift-And-Add (SAA), were used to investigate and compare View Angle (VA) and the number of projection images (N) with parallel imaging configurations. In addition, in order to better determine which method displayed better-quality imaging, Modulation Transfer Function (MTF) analyses were conducted with both algorithms, ultimately producing results which support better breast cancer detection. Research studies find evidence that early detection of the disease is the best way to conquer breast cancer, and earlier detection results in an increased life span for the affected person. PMID:23900440
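
    A minimal sketch of the Shift-And-Add idea: each projection is shifted in proportion to the tangent of its view angle and the height of the plane of interest, then the shifted views are averaged. The geometry, sizes and shift convention here are simplifying assumptions.

        import numpy as np

        n_views, h, w = 9, 64, 64
        projections = np.random.rand(n_views, h, w)   # stand-in projection images
        angles = np.linspace(-12, 12, n_views)        # view angles in degrees
        plane_mm, pixel_mm = 10.0, 0.5                # hypothetical geometry

        recon = np.zeros((h, w))
        for proj, a in zip(projections, angles):
            shift_px = int(round(plane_mm * np.tan(np.radians(a)) / pixel_mm))
            recon += np.roll(proj, shift_px, axis=1)  # shift along the sweep direction
        recon /= n_views
        print("reconstructed plane shape:", recon.shape)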

  4. Breast cancer histopathology image analysis: a review.

    PubMed

    Veta, Mitko; Pluim, Josien P W; van Diest, Paul J; Viergever, Max A

    2014-05-01

    This paper presents an overview of methods that have been proposed for the analysis of breast cancer histopathology images. This research area has become particularly relevant with the advent of whole slide imaging (WSI) scanners, which can perform cost-effective and high-throughput histopathology slide digitization, and which aim at replacing the optical microscope as the primary tool used by pathologists. Breast cancer is the most prevalent form of cancer among women, and image analysis methods that target this disease have a huge potential to reduce the workload in a typical pathology lab and to improve the quality of the interpretation. This paper is meant as an introduction for nonexperts. It starts with an overview of the tissue preparation, staining and slide digitization processes, followed by a discussion of the different image processing techniques and applications, ranging from analysis of tissue staining to computer-aided diagnosis and prognosis of breast cancer patients. PMID:24759275

  5. Principal component analysis of scintimammographic images.

    PubMed

    Bonifazzi, Claudio; Cinti, Maria Nerina; Vincentis, Giuseppe De; Finos, Livio; Muzzioli, Valerio; Betti, Margherita; Nico, Lanconelli; Tartari, Agostino; Pani, Roberto

    2006-01-01

    The recent development of new gamma imagers based on scintillation arrays with high spatial resolution has strongly improved the possibility of detecting sub-centimeter cancers in scintimammography. However, Compton scattering contamination remains the main drawback, since it limits the sensitivity of tumor detection. Principal component image analysis (PCA), recently introduced in scintimammographic imaging, is a data reduction technique able to represent the radiation emitted from the chest, and from healthy and damaged breast tissue, as separate images. From these images a scintimammogram can be obtained where the Compton contamination is "removed". In the present paper we compared the PCA-reconstructed images with the conventional scintimammographic images resulting from the photopeak (Ph) energy window. Data coming from a clinical trial were used. For both kinds of images the tumor presence was quantified by evaluating the Student's t statistic for independent samples as a measure of the signal-to-noise ratio (SNR). Owing to the absence of Compton scattering, the PCA-reconstructed images show better noise suppression and allow more reliable diagnostics in comparison with the images obtained from the photopeak energy window, reducing the tendency to produce false positives. PMID:17646004
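
    A minimal sketch of PCA on a stack of planar images via the singular value decomposition; the random stack and its dimensions are stand-ins for clinical acquisitions.

        import numpy as np

        n_frames, h, w = 8, 64, 64
        stack = np.random.rand(n_frames, h, w)
        X = stack.reshape(n_frames, -1)              # frames x pixels
        Xc = X - X.mean(axis=0)                      # remove the mean image
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        component_images = Vt.reshape(n_frames, h, w)
        explained = S**2 / np.sum(S**2)
        print("variance explained by first two PCs:", explained[:2])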

  6. Image analysis in comparative genomic hybridization

    SciTech Connect

    Lundsteen, C.; Maahr, J.; Christensen, B.

    1995-01-01

    Comparative genomic hybridization (CGH) is a new technique by which genomic imbalances can be detected by combining in situ suppression hybridization of whole genomic DNA and image analysis. We have developed software for rapid, quantitative CGH image analysis by a modification and extension of the standard software used for routine karyotyping of G-banded metaphase spreads in the Magiscan chromosome analysis system. The DAPI-counterstained metaphase spread is karyotyped interactively. Corrections for image shifts between the DAPI, FITC, and TRITC images are done manually by moving the three images relative to each other. The fluorescence background is subtracted. A mean filter is applied to smooth the FITC and TRITC images before the fluorescence ratio between the individual FITC and TRITC-stained chromosomes is computed pixel by pixel inside the area of the chromosomes determined by the DAPI boundaries. Fluorescence intensity ratio profiles are generated, and peaks and valleys indicating possible gains and losses of test DNA are marked if they exceed ratios below 0.75 and above 1.25. By combining the analysis of several metaphase spreads, consistent findings of gains and losses in all or almost all spreads indicate chromosomal imbalance. Chromosomal imbalances are detected either by visual inspection of fluorescence ratio (FR) profiles or by a statistical approach that compares FR measurements of the individual case with measurements of normal chromosomes. The complete analysis of one metaphase can be carried out in approximately 10 minutes. 8 refs., 7 figs., 1 tab.
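
    A minimal sketch of the ratio computation: mean-filter the FITC and TRITC images, form the pixelwise ratio inside the DAPI-derived mask, and flag the 0.75/1.25 thresholds quoted above. All images here are synthetic stand-ins.

        import numpy as np
        from scipy import ndimage

        fitc = np.random.rand(128, 128) + 0.5
        tritc = np.random.rand(128, 128) + 0.5
        dapi_mask = np.zeros((128, 128), dtype=bool)
        dapi_mask[40:90, 50:70] = True               # stand-in chromosome area

        ratio = ndimage.uniform_filter(fitc, 5) / ndimage.uniform_filter(tritc, 5)
        inside = ratio[dapi_mask]                    # restrict to chromosome pixels
        print("gain pixels:", int((inside > 1.25).sum()),
              "loss pixels:", int((inside < 0.75).sum()))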

  7. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  8. Hybrid µCT-FMT imaging and image analysis

    PubMed Central

    Zafarnia, Sara; Babler, Anne; Jahnen-Dechent, Willi; Lammers, Twan; Lederle, Wiltrud; Kiessling, Fabian

    2015-01-01

    Fluorescence-mediated tomography (FMT) enables longitudinal and quantitative determination of the fluorescence distribution in vivo and can be used to assess the biodistribution of novel probes and to assess disease progression using established molecular probes or reporter genes. The combination with an anatomical modality, e.g., micro computed tomography (µCT), is beneficial for image analysis and for fluorescence reconstruction. We describe a protocol for multimodal µCT-FMT imaging including the image processing steps necessary to extract quantitative measurements. After preparing the mice and performing the imaging, the multimodal data sets are registered. Subsequently, an improved fluorescence reconstruction is performed, which takes into account the shape of the mouse. For quantitative analysis, organ segmentations are generated based on the anatomical data using our interactive segmentation tool. Finally, the biodistribution curves are generated using a batch-processing feature. We show the applicability of the method by assessing the biodistribution of a well-known probe that binds to bones and joints. PMID:26066033

  9. Particle Pollution Estimation Based on Image Analysis.

    PubMed

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757

  11. Membrane composition analysis by imaging mass spectrometry

    SciTech Connect

    Boxer, S G; Kraft, M L; Longo, M; Hutcheon, I D; Weber, P K

    2006-03-29

    Membranes on solid supports offer an ideal format for imaging. Secondary ion mass spectrometry (SIMS) can be used to obtain composition information on membrane-associated components. Using the NanoSIMS50, images of composition variations in membrane domains can be obtained with a lateral resolution better than 100 nm. By suitable calibration, these variations in composition can be translated into a quantitative analysis of the membrane composition. Progress towards imaging small phase-separated lipid domains, membrane-associated proteins and natural biological membranes will be described.

  12. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found, given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses, and through experimental planning to geometrically locate pulse positions with greater certainty.

  13. Design Criteria For Networked Image Analysis System

    NASA Astrophysics Data System (ADS)

    Reader, Cliff; Nitteberg, Alan

    1982-01-01

    Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance with low cost offered by advances in semiconductor technology. Another key issue is a maturing in the understanding of problems and the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of the above stated issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with orientation toward the hospital environment. The three main areas are image data base management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, data base techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.

  14. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains. PMID:24051775

  15. A pairwise image analysis with sparse decomposition

    NASA Astrophysics Data System (ADS)

    Boucher, A.; Cloppet, F.; Vincent, N.

    2013-02-01

    This paper aims to detect the evolution between two images representing the same scene. The evolution detection problem has many practical applications, especially in medical imaging. Indeed, the concept of a patient "file" implies the joint analysis of different acquisitions taken at different times, and the detection of significant modifications. The research presented in this paper is carried out within the application context of the development of computer-assisted diagnosis (CAD) applied to mammograms. It is performed on already registered pairs of images. As the registration is never perfect, we must develop a comparison method sufficiently well adapted to detect real small differences between comparable tissues. In many applications, the assessment of similarity used during the registration step is also used in the interpretation step that prompts suspicious regions. In our case, registration is assumed to match the spatial coordinates of similar anatomical elements. In this paper, in order to process the medical images at the tissue level, the image representation is based on elementary patterns, therefore seeking patterns, not pixels. Besides, as the studied images have low entropy, the decomposed signal is expressed in a parsimonious way. Parsimonious representations are known to help extract the significant structures of a signal and to generate a compact version of the data. This change of representation should allow us to compare the studied images quickly, thanks to the low weight of the images thus represented, while maintaining a good representativeness. The good precision of our results shows the efficiency of the approach.

  16. Image analysis of insulation mineral fibres.

    PubMed

    Talbot, H; Lee, T; Jeulin, D; Hanton, D; Hobbs, L W

    2000-12-01

    We present two methods for measuring the diameter and length of man-made vitreous fibres based on the automated image analysis of scanning electron microscopy images. The fibres we want to measure are used in materials such as glass wool, which in turn are used for thermal and acoustic insulation. The measurement of the diameters and lengths of these fibres is used by the glass wool industry for quality control purposes. To obtain reliable quality estimators, the measurement of several hundred images is necessary. These measurements are usually obtained manually by operators. Manual measurements, although reliable when performed by skilled operators, are slow due to the need for the operators to rest often to retain their ability to spot faint fibres on noisy backgrounds. Moreover, the task of measuring thousands of fibres every day, even with the help of semi-automated image analysis systems, is dull and repetitive. The need for an automated procedure which could replace manual measurements is quite real. For each of the two methods that we propose to accomplish this task, we present the sample preparation, the microscope setting and the image analysis algorithms used for the segmentation of the fibres and for their measurement. We also show how a statistical analysis of the results can alleviate most measurement biases, and how we can estimate the true distribution of fibre lengths by diameter class by measuring only the lengths of the fibres visible in the field of view. PMID:11106965
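
    A minimal sketch of automated fibre measurement from a binary segmentation: the skeleton approximates the length, and twice the Euclidean distance transform sampled along the skeleton approximates the local diameter. The synthetic strip stands in for a segmented fibre.

        import numpy as np
        from scipy import ndimage
        from skimage.morphology import skeletonize

        img = np.zeros((64, 64), dtype=bool)
        img[30:34, 5:60] = True                      # a 4-pixel-wide "fibre"
        skel = skeletonize(img)
        edt = ndimage.distance_transform_edt(img)
        print(f"length ~ {int(skel.sum())} px, "
              f"diameter ~ {2 * edt[skel].mean():.1f} px")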

  17. Automated eXpert Spectral Image Analysis

    Energy Science and Technology Software Center (ESTSC)

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X
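
    The MCR-ALS step can be conveyed with a toy alternating least squares on D ~ C S^T, enforcing non-negativity by clipping; the data are synthetic and this is not the AXSIA code.

        import numpy as np

        rng = np.random.default_rng(2)
        n_pix, n_chan, k = 500, 50, 3
        D = rng.random((n_pix, k)) @ rng.random((k, n_chan))  # noiseless mixture
        D += 0.01 * rng.standard_normal(D.shape)

        S = rng.random((n_chan, k))                  # random initial spectra
        for _ in range(100):
            C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)   # concentrations
            S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None) # spectra
        resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
        print("relative residual:", round(float(resid), 4))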

  18. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique is never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future. PMID:20511080

  19. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings of all types of ordnance. A large inventory of cameras is available on the market that can be used for these tasks. A new set of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  20. Endoscopic image analysis in semantic space.

    PubMed

    Kwitt, R; Vasconcelos, N; Rasiwasia, N; Uhl, A; Davis, B; Häfner, M; Wrba, F

    2012-10-01

    A novel approach to the design of a semantic, low-dimensional, encoding for endoscopic imagery is proposed. This encoding is based on recent advances in scene recognition, where semantic modeling of image content has gained considerable attention over the last decade. While the semantics of scenes are mainly comprised of environmental concepts such as vegetation, mountains or sky, the semantics of endoscopic imagery are medically relevant visual elements, such as polyps, special surface patterns, or vascular structures. The proposed semantic encoding differs from the representations commonly used in endoscopic image analysis (for medical decision support) in that it establishes a semantic space, where each coordinate axis has a clear human interpretation. It is also shown to establish a connection to Riemannian geometry, which enables principled solutions to a number of problems that arise in both physician training and clinical practice. This connection is exploited by leveraging results from information geometry to solve problems such as (1) recognition of important semantic concepts, (2) semantically-focused image browsing, and (3) estimation of the average-case semantic encoding for a collection of images that share a medically relevant visual detail. The approach can provide physicians with an easily interpretable, semantic encoding of visual content, upon which further decisions, or operations, can be naturally carried out. This is contrary to the prevalent practice in endoscopic image analysis for medical decision support, where image content is primarily captured by discriminative, high-dimensional, appearance features, which possess discriminative power but lack human interpretability. PMID:22717411

  2. Analysis of an interferometric Stokes imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Murali, Sukumar

    Estimation of Stokes vector components from an interferometric fringe-encoded image is a novel way of measuring the State Of Polarization (SOP) distribution across a scene. Imaging polarimeters employing interferometric techniques encode SOP information across a scene in a single image in the form of intensity fringes. The lack of moving parts and the use of a single image eliminate the problems of conventional polarimetry - vibration, spurious signal generation due to artifacts, beam wander, and the need for registration routines. However, interferometric polarimeters are limited to narrow-bandpass, short-exposure operation, which decreases the Signal to Noise Ratio (SNR), defined as the ratio of the mean photon count to the standard deviation in the detected image. A simulation environment for designing an Interferometric Stokes Imaging Polarimeter (ISIP) and a detector with noise effects is created and presented. Users of this environment are capable of imaging an object with a defined SOP through an ISIP onto a detector, producing a digitized image output. The simulation also includes bandpass imaging capabilities, control of detector noise, and object brightness levels. The Stokes images are estimated from a fringe-encoded image of a scene by means of a reconstructor algorithm. A spatial-domain methodology involving the idea of a unit cell and a slide approach is applied to the reconstructor model developed using Mueller calculus. The validation of this methodology and its effectiveness compared to a discrete approach is demonstrated with suitable examples. The pixel size required to sample the fringes and the minimum unit cell size required for reconstruction are investigated using condition numbers. The importance of the PSF of the fore-optics (telescope) used in imaging the object is investigated and analyzed using a point-source imaging example, and a Nyquist criterion is presented. Reconstruction of fringe-modulated images in the presence of noise involves choosing an

  3. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
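
    For contrast with the paper's non-orthogonal learning rule, a baseline unmixing with standard FastICA from scikit-learn looks like the following; the cube, band count and component count are assumptions, and this is only a baseline illustration, not the proposed algorithm.

        import numpy as np
        from sklearn.decomposition import FastICA

        h, w, bands = 32, 32, 20
        cube = np.random.rand(h, w, bands)           # stand-in hyperspectral cube
        X = cube.reshape(-1, bands)                  # pixels x bands
        ica = FastICA(n_components=4, random_state=0, max_iter=1000)
        abundances = ica.fit_transform(X)            # pixels x components
        print("abundance maps shape:", abundances.reshape(h, w, 4).shape)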

  4. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis, of a scanning electron microscopy (SEM) image, is a time consuming process and requires continuous human intervention and efforts. This paper presents an image processing based method for automated offline analyses of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at the noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state of the art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring the parameters, like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited a good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm. PMID:25089617

  5. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. PMID:20713305

  6. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review of recent work applying the Fourier approach to the analysis of invisibility cloaks and to optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constitutive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and creating illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.
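
    The transfer-function view is easy to demonstrate numerically: transform a field, multiply by a synthesized transfer function, and transform back. A minimal sketch, with a simple low-pass standing in for a device's transfer function.

        import numpy as np

        field = np.random.rand(128, 128)             # stand-in input field
        F = np.fft.fftshift(np.fft.fft2(field))
        ky, kx = np.indices(F.shape)
        H = (np.hypot(kx - 64, ky - 64) < 20).astype(float)  # synthesized transfer function
        out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
        print("output field shape:", out.shape)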

  7. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
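
    A minimal sketch of the area measurement described above, under assumed inputs: registered before/after grayscale images of the stained tooth and a boolean mask marking the interproximal region. The threshold value and all names are hypothetical.

        import numpy as np

        def removed_stain_fraction(before, after, mask, thresh=40):
            """Fraction of the masked interproximal area where stain was removed.

            before, after : grayscale images (uint8 arrays), registered
            mask          : boolean array defining the interproximal region
            thresh        : gray-level increase treated as 'stain removed'
            """
            diff = after.astype(int) - before.astype(int)  # brighter where stain is gone
            removed = (diff > thresh) & mask
            return removed.sum() / mask.sum()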

  8. Morphometry of spermatozoa using semiautomatic image analysis.

    PubMed Central

    Jagoe, J R; Washbrook, N P; Hudson, E A

    1986-01-01

    Human sperm heads were detected and tracked using semiautomatic image analysis. Measurements of size and shape on two specimens from each of 26 men showed that the major component of variability both within and between subjects was the number of small elongated sperm heads. Variability of the computed features between subjects was greater than that between samples from the same subject. PMID:3805320

  9. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  10. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  11. Expert system for imaging spectrometer analysis results

    NASA Technical Reports Server (NTRS)

    Borchardt, Gary C.

    1985-01-01

    This report outlines an expert system for analyzing imaging spectrometer results. Implementation requirements, the Simple Tool for Automated Reasoning (STAR) program that provides a software environment for the development and operation of rule-based expert systems, STAR data structures, and rule-based identification of surface materials are among the topics covered.

  12. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  13. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been established, increasing the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is grounded in applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  14. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been established, increasing the accuracy of medical diagnostic imaging and radiation therapy. Computational image analysis is grounded in applied mathematics, physics, and engineering. This review paper introduces how computational image analysis is useful in radiation therapy with respect to radiological physics.

  15. Frequency domain analysis of knock images

    NASA Astrophysics Data System (ADS)

    Qi, Yunliang; He, Xin; Wang, Zhi; Wang, Jianxin

    2014-12-01

    High-speed-imaging-based knock analysis has mainly focused on time-domain information, e.g. the spark-triggered flame speed, the time when end-gas auto-ignition occurs and the end-gas flame speed after auto-ignition. This study presents a frequency domain analysis of knock images recorded using a high speed camera with direct photography in a rapid compression machine (RCM). To clearly visualize the pressure wave oscillation in the combustion chamber, the images were high-pass filtered to extract the luminosity oscillation. The luminosity spectrum was then obtained by applying the fast Fourier transform (FFT) to the three basic colour components (red, green and blue) of the high-pass-filtered images. Compared to the pressure spectrum, the luminosity spectra better identify the resonant modes of pressure wave oscillation. More importantly, the resonant mode shapes can be clearly visualized by reconstructing the images based on the amplitudes of the luminosity spectra at the corresponding resonant frequencies, which agree well with the analytical solutions for the mode shapes of gas vibration in a cylindrical cavity.
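
    A simplified sketch of the spectral step above: it high-pass filters the spatially averaged luminosity of an RGB frame sequence and takes its FFT per colour channel. The frame rate, filter order and cutoff are assumed values, and the paper works per pixel rather than on the spatial average.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def luminosity_spectrum(frames, fps, cutoff_hz=1000.0):
            """frames: (T, H, W, 3) RGB knock images; returns the frequency axis
            and the amplitude spectrum of the high-pass-filtered, spatially
            averaged luminosity for each colour channel."""
            lum = frames.reshape(frames.shape[0], -1, 3).mean(axis=1)  # (T, 3)
            # cutoff_hz must lie below the Nyquist frequency fps / 2
            b, a = butter(4, cutoff_hz / (fps / 2), btype="high")
            filtered = filtfilt(b, a, lum, axis=0)
            spectrum = np.abs(np.fft.rfft(filtered, axis=0))
            freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
            return freqs, spectrum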

  16. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  17. Multiscale likelihood analysis and image reconstruction

    NASA Astrophysics Data System (ADS)

    Willett, Rebecca M.; Nowak, Robert D.

    2003-11-01

    The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.

  18. Recent advances in morphological cell image analysis.

    PubMed

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

    This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have made great contributions to diagnostic results. Morphological cell analysis addresses cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristic extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  19. Analysis of imaging system performance capabilities

    NASA Astrophysics Data System (ADS)

    Haim, Harel; Marom, Emanuel

    2013-06-01

    Performance analyses of optical imaging systems based on results obtained with classic one-dimensional (1D) resolution targets (such as the USAF resolution chart) differ significantly from those obtained with a newly proposed 2D target [1]. We hereby prove this claim and show how the novel 2D target should be used for correct characterization of optical imaging systems in terms of resolution and contrast. We then apply the consequences of these observations to the optimal design of some two-dimensional barcode structures.

  20. Autonomous Image Analysis for Future Mars Missions

    NASA Technical Reports Server (NTRS)

    Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.

    1999-01-01

    To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images.

  1. Morphological analysis of infrared images for waterjets

    NASA Astrophysics Data System (ADS)

    Gong, Yuxin; Long, Aifang

    2013-03-01

    High-speed waterjets have been widely used in industry and investigated as a model of free shearing turbulence. This paper presents an investigation involving flow visualization of a high-speed water jet; noise reduction of the raw thermogram using a high-pass morphological filter and a median filter; image enhancement using a white top-hat filter; and image segmentation using a multiple-thresholding method. The image processing results produced by the designed morphological (top-hat) filters proved ideal for further quantitative and in-depth analysis, and the filters can be used as a new morphological filter bank that may be of general relevance for analogous work.
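
    For the de-noising, enhancement and thresholding steps above, a minimal sketch using SciPy's grayscale morphology; the filter sizes, percentile thresholds and synthetic stand-in thermogram are assumptions.

        import numpy as np
        from scipy import ndimage

        # thermogram: 2-D float array from the IR camera (stand-in data here)
        thermogram = np.random.rand(480, 640)

        denoised = ndimage.median_filter(thermogram, size=3)
        # White top-hat keeps bright details smaller than the structuring element
        enhanced = ndimage.white_tophat(denoised, size=15)

        # Multiple thresholding: segment into background / jet body / bright core
        t1, t2 = np.percentile(enhanced, [80, 95])
        labels = np.digitize(enhanced, [t1, t2])   # values 0, 1, 2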

  2. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    PubMed Central

    Lacerda, Luis Miguel; Ferreira, Hugo Alexandre

    2015-01-01

    Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality specific data procedures. Until now, no single toolbox was able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjects with 18F-Altanserin PET data. Results. It was observed both a high inter-hemispheric symmetry
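
    Once a connectivity matrix is available, graph metrics such as node degree and strength follow directly. A minimal numpy sketch (the threshold and matrix layout are assumptions; MIBCA itself is MATLAB-based and uses the Brain Connectivity Toolbox):

        import numpy as np

        def degree_and_strength(C, threshold=0.2):
            """Basic graph metrics from a weighted connectivity matrix.

            C         : (N, N) symmetric matrix, e.g. fibre counts or fMRI correlations
            threshold : weights below this are treated as no connection
            """
            A = (np.abs(C) >= threshold).astype(int)
            np.fill_diagonal(A, 0)
            degree = A.sum(axis=1)                  # connections per region
            strength = (np.abs(C) * A).sum(axis=1)  # summed weight per region
            return degree, strength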

  3. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for assessing abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently appearing form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms has been captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease; ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; iii) first- and second-order statistical analysis of pre-processed thermograms. The conclusion reflects that, in most cases, RA-oriented inflammation affects both knees whereas inflammation related to OA is present in a unilateral knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
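
    A minimal sketch of the K-means step above, clustering thermogram pixels by temperature and flagging the hottest cluster as the candidate inflamed region; the cluster count and the scikit-learn usage are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_thermogram(thermogram, k=3):
            """Group pixels of a knee thermogram into k temperature clusters;
            returns the label image and a mask of the hottest cluster."""
            pixels = thermogram.reshape(-1, 1).astype(float)
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
            labels = km.labels_.reshape(thermogram.shape)
            hottest = np.argmax(km.cluster_centers_.ravel())
            return labels, labels == hottest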

  4. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention as next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Gaining an understanding of their structure, such as connectivity, coverage, and alignment of nanowires, requires knowledge of the individual nanowires inside microscopic images taken from the film. Although nanowires are flexible up to a certain extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm based on filtering in the frequency domain for detecting rod-shaped objects in binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely, the connected components aligned in a certain direction. Those components are post-processed and combined into a single rod object under a given merging rule. In this study, the microscopic properties of rod networks relevant to the analysis of nanowire networks, namely the alignment distribution, length distribution, and area fraction, were measured for investigating the opto-electric performance of transparent conductive films. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. By selecting proper parameters, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip coating method.
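
    The paper's specially designed filter is not reproduced here; as a stand-in, the sketch below keeps only the spectral wedge perpendicular to a chosen rod orientation, which passes image segments aligned near that direction. The wedge width and the numpy layout are assumptions.

        import numpy as np

        def directional_filter(binary_img, theta, half_width_deg=10.0):
            """Keep image content whose spectrum lies along one orientation.

            A rod aligned at angle theta (radians) concentrates its spectrum
            in the wedge perpendicular to theta; this mask is a simplified
            stand-in for the paper's filter.
            """
            h, w = binary_img.shape
            fy, fx = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
            angle = np.arctan2(fy, fx)               # spectral orientation
            target = theta + np.pi / 2               # perpendicular wedge
            # Wrapped angular difference, pi-periodic (lines have no sign)
            d = np.angle(np.exp(1j * 2 * (angle - target))) / 2
            mask = np.abs(d) < np.deg2rad(half_width_deg)
            mask[0, 0] = True                        # keep the DC term
            spectrum = np.fft.fft2(binary_img.astype(float))
            return np.real(np.fft.ifft2(spectrum * mask))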

  5. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which incurs a tremendous annotation burden for pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing much prediction accuracy. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Both may be unsuitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm that achieves a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
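
    The paper's adaptive submodular formulation is not reproduced here; the sketch below shows a simpler, related idea: greedily selecting a diverse batch to label by farthest-first traversal over feature vectors. The feature matrix and batch size are assumed inputs.

        import numpy as np

        def greedy_diverse_batch(features, batch_size):
            """Pick a diverse batch: repeatedly select the sample farthest
            from everything already chosen (farthest-first traversal)."""
            chosen = [0]                              # seed with an arbitrary sample
            d = np.linalg.norm(features - features[0], axis=1)
            for _ in range(batch_size - 1):
                nxt = int(np.argmax(d))               # farthest from the current set
                chosen.append(nxt)
                d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
            return chosen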

  6. Multiresolution simulated annealing for brain image analysis

    NASA Astrophysics Data System (ADS)

    Loncaric, Sven; Majcenic, Zoran

    1999-05-01

    Analysis of biomedical images is an important step in the quantification of various diseases such as human spontaneous intracerebral brain hemorrhage (ICH). In particular, the study of outcome in patients having ICH requires measurements of various ICH parameters such as hemorrhage volume and their change over time. A multiresolution probabilistic approach for segmentation of CT head images is presented in this work. This method views the segmentation problem as a pixel labeling problem. In this application the labels are: background, skull, brain tissue, and ICH. The proposed method is based on the Maximum A Posteriori (MAP) estimation of the unknown pixel labels. The MAP method maximizes the a posteriori probability of the segmented image given the observed (input) image. A Markov random field (MRF) model has been used for the posterior distribution. The MAP estimate of the segmented image has been determined using the simulated annealing (SA) algorithm, which minimizes the energy function associated with the MRF posterior distribution. A multiresolution SA (MSA) has been developed to speed up the annealing process and is presented in detail in this work. A knowledge-based classification based on brightness, size, shape and relative position toward other regions is performed at the end of the procedure. The regions are identified as background, skull, brain, ICH and calcifications.
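
    A single-resolution sketch of MAP labeling with an MRF prior and simulated annealing; the class means, the Potts smoothness prior and the cooling schedule are illustrative assumptions, and the paper's multiresolution scheme is omitted for brevity.

        import numpy as np

        def sa_segment(img, means, beta=1.0, T0=4.0, cooling=0.95, sweeps=50):
            """MAP pixel labeling with a Potts MRF prior via simulated annealing
            (e.g. means=[0.1, 0.5, 0.9] for a three-class grayscale image)."""
            rng = np.random.default_rng(0)
            means = np.asarray(means, dtype=float)
            labels = np.argmin(np.abs(img[..., None] - means), axis=-1)  # nearest-mean init
            T = T0
            for _ in range(sweeps):
                for _ in range(img.size):
                    y, x = rng.integers(img.shape[0]), rng.integers(img.shape[1])
                    old, new = labels[y, x], rng.integers(len(means))
                    # Data term: squared distance to the class mean
                    dE = (img[y, x] - means[new])**2 - (img[y, x] - means[old])**2
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # Potts smoothness
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                            dE += beta * (int(labels[ny, nx] != new)
                                          - int(labels[ny, nx] != old))
                    if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
                        labels[y, x] = new
                T *= cooling                                      # geometric cooling
            return labels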

  7. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1987-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  8. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  9. Quantitative image analysis of celiac disease

    PubMed Central

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-01-01

    We outline the use of quantitative techniques that are currently used for analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are yet broad areas where there is potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients. PMID:25759524

  10. Imaging Brain Dynamics Using Independent Component Analysis

    PubMed Central

    Jung, Tzyy-Ping; Makeig, Scott; McKeown, Martin J.; Bell, Anthony J.; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    The analysis of electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings is important both for basic brain research and for medical diagnosis and treatment. Independent component analysis (ICA) is an effective method for removing artifacts and separating sources of the brain signals from these recordings. A similar approach is proving useful for analyzing functional magnetic resonance brain imaging (fMRI) data. In this paper, we outline the assumptions underlying ICA and demonstrate its application to a variety of electrical and hemodynamic recordings from the human brain. PMID:20824156
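
    A minimal demonstration of the blind source separation idea above using scikit-learn's FastICA on two synthetic mixed signals; the signals, mixing matrix and noise level are fabricated for illustration only.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Two synthetic sources mixed into two channel recordings
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 2000)
        sources = np.c_[np.sin(2 * np.pi * 10 * t),            # oscillatory signal
                        np.sign(np.sin(2 * np.pi * 1.3 * t))]  # artifact-like square wave
        mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
        recordings = sources @ mixing.T + 0.05 * rng.standard_normal((2000, 2))

        ica = FastICA(n_components=2, random_state=0)
        unmixed = ica.fit_transform(recordings)  # estimated sources, up to scale and order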

  11. Theoretical analysis of multispectral image segmentation criteria.

    PubMed

    Kerfoot, I B; Bresler, Y

    1999-01-01

    Markov random field (MRF) image segmentation algorithms have been extensively studied, and have gained wide acceptance. However, almost all of the work on them has been experimental. This provides a good understanding of the performance of existing algorithms, but not a unified explanation of the significance of each component. To address this issue, we present a theoretical analysis of several MRF image segmentation criteria. Standard methods of signal detection and estimation are used in the theoretical analysis, which quantitatively predicts the performance at realistic noise levels. The analysis is decoupled into the problems of false alarm rate, parameter selection (Neyman-Pearson and receiver operating characteristics), detection threshold, expected a priori boundary roughness, and supervision. Only the performance inherent to a criterion, with perfect global optimization, is considered. The analysis indicates that boundary and region penalties are very useful, while distinct-mean penalties are of questionable merit. Region penalties are far more important for multispectral segmentation than for greyscale. This observation also holds for Gauss-Markov random fields, and for many separable within-class PDFs. To validate the analysis, we present optimization algorithms for several criteria. Theoretical and experimental results agree fairly well. PMID:18267494

  12. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
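
    The Print Index proposed above reduces to the fraction of dark (engraved-line) pixels in a measured area. A minimal sketch, assuming a calibrated grayscale image and a fixed binarization threshold:

        import numpy as np

        def print_index(gray, threshold=128):
            """Percentage of the measured area covered by engraved (dark) lines."""
            lines = gray < threshold
            return 100.0 * lines.mean()

        # Comparing the index across editions of the same plate would then
        # quantify line thinning, under consistent illumination and focus.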

  13. Multispectral laser imaging for advanced food analysis

    NASA Astrophysics Data System (ADS)

    Senni, L.; Burrascano, P.; Ricci, M.

    2016-07-01

    A hardware-software apparatus for food inspection capable of realizing multispectral NIR laser imaging at four different wavelengths is herein discussed. The system was designed to operate in a through-transmission configuration to detect the presence of unwanted foreign bodies inside samples, whether packed or unpacked. A modified Lock-In technique was employed to counterbalance the significant signal intensity attenuation due to transmission across the sample and to extract the multispectral information more efficiently. The NIR laser wavelengths used to acquire the multispectral images can be varied to deal with different materials and to focus on specific aspects. In the present work the wavelengths were selected after a preliminary analysis to enhance the image contrast between foreign bodies and food in the sample, thus identifying the location and nature of the defects. Experimental results obtained from several specimens, with and without packaging, are presented and the multispectral image processing as well as the achievable spatial resolution of the system are discussed.

  14. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession. PMID:23343236

  15. Shannon information and ROC analysis in imaging.

    PubMed

    Clarkson, Eric; Cushing, Johnathan B

    2015-07-01

    Shannon information (SI) and the ideal-observer receiver operating characteristic (ROC) curve are two different methods for analyzing the performance of an imaging system for a binary classification task, such as the detection of a variable signal embedded within a random background. In this work we describe a new ROC curve, the Shannon information receiver operator curve (SIROC), that is derived from the SI expression for a binary classification task. We then show that the ideal-observer ROC curve and the SIROC have many properties in common, and are equivalent descriptions of the optimal performance of an observer on the task. This equivalence is described mathematically by an integral transform that maps the ideal-observer ROC curve onto the SIROC. This then leads to an integral transform relating the minimum probability of error, as a function of the odds against a signal, to the conditional entropy, as a function of the same variable. This last relation then gives us the complete mathematical equivalence between ideal-observer ROC analysis and SI analysis of the classification task for a given imaging system. We also find that there is a close relationship between the area under the ideal-observer ROC curve, which is often used as a figure of merit for imaging systems, and the area under the SIROC. Finally, we show that the relationships between the two curves result in new inequalities relating SI to ROC quantities for the ideal observer. PMID:26367158

  16. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1) to the latest high definition camcorders, the number of recorded pieces of reality increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into this area, both by academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  17. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
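
    A rough sketch of wavelet-based texture features using PyWavelets: sub-band energies from a 2-D decomposition, which could then feed a maximum-likelihood classifier. The paper uses wavelet frames; the plain DWT, the wavelet name and the level count here are assumptions.

        import numpy as np
        import pywt

        def wavelet_energy_features(gray, wavelet="db2", levels=3):
            """Sub-band energy texture features from a 2-D wavelet decomposition."""
            coeffs = pywt.wavedec2(gray.astype(float), wavelet, level=levels)
            feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
            for cH, cV, cD in coeffs[1:]:                  # detail sub-bands per level
                feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
            return np.array(feats)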

  18. PAMS photo image retrieval prototype alternatives analysis

    SciTech Connect

    Conner, M.L.

    1996-04-30

    Photography and Audiovisual Services uses a system called the Photography and Audiovisual Management System (PAMS) to perform order entry and billing services. The PAMS system utilizes Revelation Technologies database management software, AREV. Work is currently in progress to link the PAMS AREV system to a Microsoft SQL Server database engine to provide photograph indexing and query capabilities. The link between AREV and SQL Server will use a technique called "bonding." This photograph imaging subsystem will interface to the PAMS system and handle the image capture and retrieval portions of the project. The intent of this alternatives analysis is to examine the software and hardware alternatives available to meet the requirements for this project, and identify a cost-effective solution.

  19. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image or the image itself in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper the authors explain these tasks, which fall into three categories: image compression, image enhancement and restoration, and measurement extraction. The tasks are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and Corel Draw.

  20. Analysis of katabatic flow using infrared imaging

    NASA Astrophysics Data System (ADS)

    Grudzielanek, M.; Cermak, J.

    2013-12-01

    We present a novel high-resolution IR method which is developed, tested and used for the analysis of katabatic flow. Modern thermal imaging systems allow for the recording of infrared picture sequences and thus the monitoring and analysis of dynamic processes. In order to identify, visualize and analyze dynamic air flow using infrared imaging, a highly reactive 'projection' surface is needed along the air flow. Here, a design for these types of analysis is proposed and evaluated. Air flow situations with strong air temperature gradients and fluctuations, such as katabatic flow, are particularly suitable for this new method. The method is applied here to analyze nocturnal cold air flows on gentle slopes. In combination with traditional methods the vertical and temporal dynamics of cold air flow are analyzed. Several assumptions on cold air flow dynamics can be confirmed explicitly for the first time. By observing the cold air flow in terms of frequency, size and period of the cold air fluctuations, drops are identified and organized in a newly derived classification system of cold air flow phases. In addition, new flow characteristics are detected, like sharp cold air caps and turbulence inside the drops. Vertical temperature gradients inside cold air drops and their temporal evolution are presented in high resolution Hovmöller-type diagrams and sequenced time lapse infrared videos.

  1. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods. PMID:27374127

  2. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this thesis, an automatic procedure is introduced for human chromosome image analysis. According to the different configurations of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. The chromosome band pattern is enhanced by an algorithm based on multiscale B-spline wavelets; band features are extracted from the average gray, gradient and shape profiles and characterized by Weighted Density Distribution (WDD) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  3. Image reconstruction from Pulsed Fast Neutron Analysis

    NASA Astrophysics Data System (ADS)

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-01

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map, or image, of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the attenuation of the neutrons through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.

  4. Image reconstruction from Pulsed Fast Neutron Analysis

    SciTech Connect

    Bendahan, Joseph; Feinstein, Leon; Keeley, Doug; Loveman, Rob

    1999-06-10

    Pulsed Fast Neutron Analysis (PFNA) has been demonstrated to detect drugs and explosives in trucks and large cargo containers. PFNA uses a collimated beam of nanosecond-pulsed fast neutrons that interact with the cargo contents to produce gamma rays characteristic of their elemental composition. By timing the arrival of the emitted radiation at an array of gamma-ray detectors, a three-dimensional elemental density map, or image, of the cargo is created. The process of determining the elemental densities is complex and requires a number of steps. The first step consists of extracting from the characteristic gamma-ray spectra the counts associated with the elements of interest. Other steps are needed to correct for physical quantities such as gamma-ray production cross sections and angular distributions. The image processing also includes phenomenological corrections that take into account the attenuation of the neutrons through the cargo, and the attenuation of the gamma rays from the point where they were generated to the gamma-ray detectors. Additional processing is required to map the elemental densities from the data acquisition coordinate system to a rectilinear system. This paper describes the image processing used to compute the elemental densities from the counts observed in the gamma-ray detectors.

  5. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor intensive and expensive. Lately a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements suitable for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images have been analyzed applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadow and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.
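
    A minimal sketch of the de-noise-and-threshold step, assuming a grayscale soil image scaled to [0, 1]; wavelet de-noising and Otsu's threshold stand in for whichever algorithms the study actually compared.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.restoration import denoise_wavelet

        def shadow_percentage(gray):
            """Percent of shadowed (dark) pixels in a soil surface image:
            wavelet de-noising followed by Otsu thresholding."""
            smooth = denoise_wavelet(gray)          # gray: float image in [0, 1]
            shadow = smooth < threshold_otsu(smooth)
            return 100.0 * shadow.mean()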

  6. Noise analysis in laser speckle contrast imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Chen, Yu; Dunn, Andrew K.; Boas, David A.

    2010-02-01

    Laser speckle contrast imaging (LSCI) is becoming an established method for full-field imaging of blood flow dynamics in animal models. A reliable quantitative model with comprehensive noise analysis is necessary to fully utilize this technique in biomedical applications and clinical trials. In this study, we investigated several major noise sources in LSCI: periodic physiology noise, shot noise and statistical noise. (1) We observed periodic physiology noise in our experiments and found that its sources consist principally of motions induced by heart beats and/or ventilation. (2) We found that shot noise caused an offset of speckle contrast (SC) values, and this offset is directly related to the incident light intensity. (3) A mathematical model of statistical noise was also developed. The model indicated that statistical noise in speckle contrast imaging is related to the SC values and the total number of pixels used in the SC calculation. Our experimental results are consistent with theoretical predictions, as well as with other published works.
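
    For reference, the speckle contrast itself is the windowed ratio K = sigma / mean of the raw speckle image. A minimal sketch follows; the window size is an assumption, and the shot-noise offset correction discussed above is not modeled.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def speckle_contrast(raw, window=7):
            """Spatial speckle contrast K = sigma / mean in a sliding window."""
            img = raw.astype(float)
            mean = uniform_filter(img, window)
            sq_mean = uniform_filter(img ** 2, window)
            std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
            return std / np.maximum(mean, 1e-9)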

  7. Difference Image Analysis of Galactic Microlensing. I. Data Analysis

    SciTech Connect

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.

    1999-08-20

    This is a preliminary report on the application of Difference Image Analysis (DIA) to Galactic bulge images. The aim of this analysis is to increase the sensitivity to the detection of gravitational microlensing. We discuss how the DIA technique simplifies the process of discovering microlensing events by detecting only objects that have variable flux. We illustrate how the DIA technique is not limited to detection of so-called "pixel lensing" events but can also be used to improve photometry for classical microlensing events by removing the effects of blending. We will present a method whereby DIA can be used to reveal the true unblended colors, positions, and light curves of microlensing events. We discuss the need for a technique to obtain the accurate microlensing timescales from blended sources and present a possible solution to this problem using the existing Hubble Space Telescope color-magnitude diagrams of the Galactic bulge and LMC. The use of such a solution with both classical and pixel microlensing searches is discussed. We show that one of the major causes of systematic noise in DIA is differential refraction. A technique for removing this systematic by effectively registering images to a common air mass is presented. Improvements to commonly used image differencing techniques are discussed.

  8. Analysis of image quality based on perceptual preference

    NASA Astrophysics Data System (ADS)

    Xue, Liqin; Hua, Yuning; Zhao, Guangzhou; Qi, Yaping

    2007-11-01

    This paper deals with image quality analysis, considering the impact of psychological factors involved in assessment. The attributes of image quality requirements were partitioned according to visual perception characteristics, and image quality preferences were obtained by the factor analysis method. The features of image quality that support subjective preference were identified; the adequacy of the image proved to be the top requirement for display image quality improvement. The approach will benefit research on subjective quantitative assessment methods for image quality.

  9. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  10. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press. PMID:27606187

  11. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image-intensified and long-wave infrared imagery to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well candidate IQ features correlate with actual human performance, which is measured by a perception study. The paper demonstrates how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
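
    The paper's monotonic correlation coefficient is novel and not reproduced here; Spearman's rank correlation is the standard monotonic measure and illustrates the idea of checking whether an IQ feature tracks human performance. All values below are fabricated for illustration.

        import numpy as np
        from scipy.stats import spearmanr

        # iq_scores: a candidate IQ feature per fused image (hypothetical values)
        # perf:      human task performance from a perception study (hypothetical)
        iq_scores = np.array([0.61, 0.72, 0.55, 0.90, 0.48, 0.80])
        perf      = np.array([0.58, 0.75, 0.52, 0.88, 0.50, 0.77])

        rho, p = spearmanr(iq_scores, perf)  # rho near 1: feature tracks performance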

  12. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  13. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  14. MaZda--a software package for image texture analysis.

    PubMed

    Szczypiński, Piotr M; Strzelecki, Michał; Materka, Andrzej; Klepaczko, Artur

    2009-04-01

    MaZda, a software package for 2D and 3D image texture analysis, is presented. It provides a complete path for quantitative analysis of image textures, including computation of texture features, procedures for feature selection and extraction, algorithms for data classification, and various data visualization and image segmentation tools. Initially, MaZda was aimed at analysis of magnetic resonance image textures. However, it has proven effective in the analysis of other types of textured images, including X-ray and camera images. The software has been utilized by numerous researchers in diverse applications and has proven to be an efficient and reliable tool for quantitative image analysis, supporting more accurate and objective medical diagnosis. MaZda was also successfully used in the food industry to assess food product quality. MaZda can be downloaded for public use from the Institute of Electronics, Technical University of Lodz webpage. PMID:18922598

  15. Thermal image analysis for detecting facemask leakage

    NASA Astrophysics Data System (ADS)

    Dowdall, Jonathan B.; Pavlidis, Ioannis T.; Levine, James

    2005-03-01

    Due to the modern advent of near-ubiquitous access to rapid international transportation, the epidemiologic trends of highly communicable diseases can be devastating. With the recent emergence of diseases matching this pattern, such as Severe Acute Respiratory Syndrome (SARS), an area of overt concern has been the transmission of infection through respiratory droplets. Approved facemasks are typically effective physical barriers for preventing the spread of viruses through droplets, but breaches in a mask's integrity can lead to an elevated risk of exposure and subsequent infection. Quality control mechanisms in place during the manufacturing process ensure that masks are defect-free when leaving the factory, but there remains little to detect damage caused by transportation or during usage. A system that could monitor masks in real time while they were in use would facilitate a more secure environment for treatment and screening. To fulfill this necessity, we have devised a touchless method to detect mask breaches in real time by utilizing the emissive properties of the mask in the thermal infrared spectrum. Specifically, we use a specialized thermal imaging system to detect minute air leakage in masks based on the principles of heat transfer and thermodynamics. The advantage of this passive modality is that thermal imaging does not require contact with the subject and can provide instant visualization and analysis. These capabilities can prove invaluable for protecting personnel in scenarios with elevated levels of transmission risk such as hospital clinics, border checkpoints, and airports.

  16. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  17. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

    Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.

  18. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

    Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete data set, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the three images as spatial representations of gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) content.
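
    A minimal sketch of the idea, treating each pixel's recovery curve as an observation and using a plain SVD for the PCA; the data here are synthetic stand-ins for relaxographic images.

    import numpy as np

    n_times, ny, nx = 12, 64, 64
    series = np.random.rand(n_times, ny, nx)      # stand-in for the image series

    X = series.reshape(n_times, -1).T             # rows: pixels, columns: time points
    X = X - X.mean(axis=0)                        # center each time point

    # SVD-based PCA: rows of Vt are temporal modes; projecting back onto the
    # pixel grid gives the component images (e.g. GM-, WM-, CSF-like maps).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    component_images = (U[:, :3] * S[:3]).T.reshape(3, ny, nx)
    explained = S**2 / np.sum(S**2)
    print("variance explained by first 3 components:", explained[:3].sum())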

  19. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab, enabling fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and face. The full source code of the developed application is provided as an attachment. (A figure shows the main window of the program during dynamic analysis of a foot thermal image.) PMID:26556680

  20. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Järvi, Timo; Kiuru, Aaro; Kormano, Martti; Svedström, Erkki

    2003-12-01

    The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopical examination takes only about 2 seconds for perfusion study with only low radiation dose to patient, involving no preparation, no radioactive isotopes, and no contrast media.

  1. High resolution ultraviolet imaging spectrometer for latent image analysis.

    PubMed

    Lyu, Hang; Liao, Ningfang; Li, Hongsong; Wu, Wenmin

    2016-03-21

    In this work, we present a close-range ultraviolet imaging spectrometer with high spatial resolution and reasonably high spectral resolution. As transmissive optical components cause chromatic aberration in the ultraviolet (UV) spectral range, an all-reflective imaging scheme is introduced to improve image quality. The proposed instrument consists of an oscillating mirror, a Cassegrain objective, a Michelson structure, an Offner relay, and a UV-enhanced CCD. The finished spectrometer has a spatial resolution of 29.30 μm on the target plane; the spectral range covers both the near and middle UV bands, with approximately 100 wavelength samples obtainable over the range 240-370 nm. The control computer coordinates all the components of the instrument and enables the capture of a series of images, which can be reconstructed into an interferogram datacube. The datacube can be converted into a spectrum datacube, which contains spectral information for each pixel with many wavelength samples. A spectral calibration is carried out using a high-pressure mercury discharge lamp. A test run demonstrated that this interferometric configuration can obtain a high-resolution spectrum datacube. A pattern recognition algorithm is introduced to analyze the datacube and distinguish latent traces from the base materials. This design is particularly good at identifying latent traces in the field of forensic imaging. PMID:27136837
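
    The interferogram-to-spectrum conversion can be sketched as a Fourier transform along the optical-path-difference (OPD) axis. This is a generic illustration, not the instrument's actual processing chain; the datacube and the 50 nm OPD step are assumptions.

    import numpy as np

    ny, nx, n_opd = 32, 32, 256
    cube = np.random.rand(ny, nx, n_opd)          # stand-in interferogram datacube

    cube = cube - cube.mean(axis=-1, keepdims=True)   # remove per-pixel DC bias
    window = np.hanning(n_opd)                        # apodization reduces ringing
    spectra = np.abs(np.fft.rfft(cube * window, axis=-1))

    # The wavenumber axis follows from the OPD sampling step (assumed 50 nm here)
    opd_step = 50e-9
    wavenumbers = np.fft.rfftfreq(n_opd, d=opd_step)  # cycles per meter
    print(spectra.shape, wavenumbers.shape)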

  2. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

    Techniques in medical image analysis are often used for comparison or regression on image intensities. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology, and usually no function is defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to brain images of schizophrenia patients and heart images of atrial fibrillation patients.

  3. Analysis of physical processes via imaging vectors

    NASA Astrophysics Data System (ADS)

    Volovodenko, V.; Efremova, N.; Efremov, V.

    2016-06-01

    Practically all modeled processes are to some degree random. The most fully formulated theoretical foundation is that of Markov processes, which can be represented in different forms. A Markov process is a random process that undergoes transitions from one state to another on a state space, where the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. In a Markov process, the model's proposition about the future does not change in the event of expansion or strong accumulation of information about preceding times. Modeling physical fields generally involves processes that change in time, i.e. non-stationary processes. In this case, applying the Laplace transformation introduces unjustified complications into the description, whereas a transition to other representations yields an explicit simplification. The method of imaging vectors provides constructive mathematical models, the necessary transitions in the modeling process, and the analysis itself. The flexibility of a model built on a polynomial basis allows rapid changes of the mathematical model and accelerates further analysis. It should be noted that the mathematical description permits an operator representation; conversely, operator representation of the structures, algorithms, and data processing procedures significantly improves the flexibility of the modeling process.

  4. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5-pixel line misregistration between the 1.55-1.75 and 2.08-2.35 micrometer bands and the first four bands. A misregistration of four 30-m lines and columns was also observed for the thermal IR band. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18-line by 128-pixel grid; no explanation for this was pursued. The general overall quality of the TM was judged to be very high.
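
    The line-correlator registration evaluation described here can be approximated today with sub-pixel phase correlation. A rough sketch, assuming scikit-image and synthetic band data rather than actual TM blocks:

    import numpy as np
    from skimage.registration import phase_cross_correlation

    band_a = np.random.rand(512, 512)             # stand-in for a 512 x 512 block
    band_b = np.roll(band_a, shift=1, axis=0)     # simulate a one-line offset

    shift, error, _ = phase_cross_correlation(band_a, band_b, upsample_factor=10)
    print("estimated (line, column) misregistration:", shift)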

  5. Dynamic chest image analysis: model-based pulmonary perfusion analysis with pyramid images

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Haapanen, Arto; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1998-07-01

    The aim of the study 'Dynamic Chest Image Analysis' is to develop computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles in a short period of time. We have proposed a framework for ventilation study with an explicit ventilation model based on pyramid images. In this paper, we extend the framework to pulmonary perfusion study. A perfusion model and the truncated pyramid are introduced. The perfusion model aims at extracting accurate, geographic perfusion parameters, and the truncated pyramid helps in understanding perfusion at multiple resolutions and speeding up the convergence process in optimization. Three cases are included to illustrate the experimental results.
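
    For illustration, a Gaussian image pyramid of the kind used for multiresolution analysis can be generated with scikit-image; the frame below is a synthetic stand-in, and the paper's "truncated pyramid" corresponds roughly to capping the number of levels.

    import numpy as np
    from skimage.transform import pyramid_gaussian

    frame = np.random.rand(256, 256)              # stand-in fluoroscopy frame

    # Truncating the pyramid amounts to stopping above the coarsest useful level
    pyramid = list(pyramid_gaussian(frame, max_layer=3, downscale=2))
    for i, level in enumerate(pyramid):
        print("level", i, level.shape)            # 256, 128, 64, 32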

  6. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

    This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, and the semantic processes involved are described as they apply to different types of medical images. Cognitive information systems are defined on the basis of methods for the semantic analysis and interpretation of information - here, medical images - aimed at the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis is proposed for analyzing the meaning of the data; meaning is contained in information such as medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis are also defined for decision-support tasks, which are very important, for example, in diagnostic and therapeutic processes, where the selection of semantic aspects/features from the analyzed data sets enables a new form of analysis. PMID:27526188

  7. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR image as input; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented, and their advantages and disadvantages are discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used to verify the concept: input images from ascending orbits were converted into images from a descending orbit, and the results were compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  8. Imaging biomarkers in multiple Sclerosis: From image analysis to population imaging.

    PubMed

    Barillot, Christian; Edan, Gilles; Commowick, Olivier

    2016-10-01

    The production of imaging data in medicine increases more rapidly than the capacity of computing models to extract information from it. The grand challenges of better understanding the brain, offering better care for neurological disorders, and stimulating new drug design will not be achieved without significant advances in computational neuroscience. The road to success is to develop a new, generic, computational methodology and to confront and validate this methodology on relevant diseases with adapted computational infrastructures. This new concept sustains the need to build new research paradigms to better understand the natural history of the pathology at the early phase, and to better aggregate data that will provide the most complete representation of the pathology in order to better correlate imaging with other relevant features such as clinical, biological, or genetic data. In this context, one of the major challenges of neuroimaging in clinical neurosciences is to detect quantitative signs of pathological evolution as early as possible to prevent disease progression, evaluate therapeutic protocols, or even better understand and model the natural history of a given neurological pathology. Many diseases involve brain alterations often not visible on conventional MRI sequences, especially in normal-appearing brain tissues (NABT). MRI often has low specificity for differentiating between possible pathological changes that could help in discriminating between different pathological stages or grades. The objective of medical image analysis procedures is to define new quantitative neuroimaging biomarkers to track the evolution of the pathology at different levels. This paper illustrates this issue in one acute neuro-inflammatory pathology: Multiple Sclerosis (MS). It presents current medical image analysis approaches and explains how this field of research will evolve in the next decade to integrate larger scales of information at the temporal, cellular

  9. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  10. Atomic force microscope, molecular imaging, and analysis.

    PubMed

    Chen, Shu-wen W; Teulon, Jean-Marie; Godon, Christian; Pellequer, Jean-Luc

    2016-01-01

    Image visibility is a central issue in analyzing all kinds of microscopic images. An increase in intensity contrast helps to raise image visibility, thereby revealing fine image features. Accordingly, a proper evaluation of results obtained with current imaging parameters can be used as feedback for future imaging experiments. In this work, we have applied the Laplacian function of image intensity as either an additive component (Laplacian mask) or a multiplying factor (Laplacian weight) for enhancing the image contrast of high-resolution AFM images of two molecular systems: an unknown protein imaged in air, provided by AFM COST Action TD1002 (http://www.afm4nanomedbio.eu/), and tobacco mosaic virus (TMV) particles imaged in liquid. Based on both visual inspection and quantitative contrast measurements, we found that the Laplacian weight is more effective than the Laplacian mask for the unknown protein, whereas for the TMV system the strengthened Laplacian mask is superior to the Laplacian weight. The present results indicate that a mathematical function, as exemplified by the Laplacian function, may yield varied processing effects with different operations. To interpret the diversity of molecular structure and topology in images, an explicit expression of the processing procedures should be included in scientific reports alongside instrumental setups. PMID:26224520
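
    The two operations compared in the paper can be sketched as follows; the exact scaling the authors used is not given, so the constants alpha and beta here are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import laplace

    image = np.random.rand(256, 256)              # stand-in AFM height image
    lap = laplace(image)

    # Laplacian mask: additive sharpening (subtracting the Laplacian boosts edges)
    alpha = 1.0                                   # illustrative strength
    masked = image - alpha * lap

    # Laplacian weight: multiplicative factor derived from the local Laplacian
    beta = 1.0                                    # illustrative strength
    weight = 1.0 + beta * np.abs(lap) / (np.abs(lap).max() + 1e-12)
    weighted = image * weight
    print(masked.std(), weighted.std())           # crude contrast indicators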

  11. Wndchrm – an open source utility for biological image analysis

    PubMed Central

    Shamir, Lior; Orlov, Nikita; Eckley, D Mark; Macura, Tomasz; Johnston, Josiah; Goldberg, Ilya G

    2008-01-01

    Background Biological imaging is an emerging field, covering a wide range of applications in biological and clinical research. However, while machinery for automated experimenting and data acquisition has been developing rapidly in the past years, automated image analysis often introduces a bottleneck in high content screening. Methods Wndchrm is an open source utility for biological image analysis. The software works by first extracting image content descriptors from the raw image, image transforms, and compound image transforms. Then, the most informative features are selected, and the feature vector of each image is used for classification and similarity measurement. Results Wndchrm has been tested using several publicly available biological datasets, and provided results which are favorably comparable to the performance of task-specific algorithms developed for these datasets. The simple user interface allows researchers who are not knowledgeable in computer vision methods and have no background in computer programming to apply image analysis to their data. Conclusion We suggest that wndchrm can be effectively used for a wide range of biological image analysis tasks. Using wndchrm can allow scientists to perform automated biological image analysis while avoiding the costly challenge of implementing computer vision and pattern recognition algorithms. PMID:18611266

  12. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling which are discussed in the context of the JWST Observatory.
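
    A single propagation step of such a multi-plane scalar diffraction model is often implemented with the angular spectrum method; the sketch below makes that concrete under assumed (non-JWST) sampling and wavelength values.

    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        # Propagate a sampled complex field a distance z (angular spectrum method)
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        fxx, fyy = np.meshgrid(fx, fx)
        arg = 1.0 - (wavelength * fxx)**2 - (wavelength * fyy)**2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        transfer = np.exp(1j * kz * z) * (arg > 0)    # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    n, dx, wl = 512, 1e-5, 633e-9                 # assumed sampling and wavelength
    yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] * dx
    aperture = np.where(xx**2 + yy**2 < (1e-3)**2, 1.0, 0.0).astype(complex)
    out = angular_spectrum(aperture, wl, dx, z=0.05)  # field 5 cm downstream
    print(np.abs(out).max())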

  13. Advanced image analysis for the preservation of cultural heritage

    NASA Astrophysics Data System (ADS)

    France, Fenella G.; Christens-Barry, William; Toth, Michael B.; Boydston, Kenneth

    2010-02-01

    The Library of Congress' Preservation Research and Testing Division has established an advanced preservation studies scientific program for research and analysis of the diverse range of cultural heritage objects in its collection. Using this system, the Library is currently developing specialized integrated research methodologies for extending preservation analytical capacities through non-destructive hyperspectral imaging of cultural objects. The research program has revealed key information to support preservation specialists, scholars, and other institutions. The approach requires close and ongoing collaboration between a range of scientific and cultural heritage personnel - imaging and preservation scientists, art historians, curators, conservators, and technology analysts. A research project on the Pierre L'Enfant Plan of Washington DC, 1791, was undertaken to implement and advance the image analysis capabilities of the imaging system. Innovative imaging options and analysis techniques allow greater processing and analysis capacities, establishing the imaging technique as the initial non-invasive analysis and documentation step in all cultural heritage analyses. Mapping spectral responses, organic and inorganic data, topographic semi-microscopic imaging, and creating full-spectrum images have greatly extended this capacity beyond a simple image capture technique. Linking hyperspectral data with other non-destructive analyses has further enhanced the research potential of this image analysis technique.

  14. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features in Parkinson's disease (PD) are degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons, and the activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize quantitative image analysis of SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, image fusion was used to combine SPECT and MR images by way of the intervening CT image acquired with SPECT/CT. Mutual information (MI) between the CT and MR images was used for the registration. Six SPECT/CT and four MR scans of phantom materials were acquired at varying orientations. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. Applying the approach to 32 clinical SPECT/CT and MR cases, all cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
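
    The MI similarity measure that drives such a registration can be computed from a joint intensity histogram; a minimal sketch, with random arrays standing in for the CT and MR slices:

    import numpy as np

    def mutual_information(a, b, bins=32):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()                   # joint intensity distribution
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                              # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    ct = np.random.rand(128, 128)                 # stand-ins for CT and MR slices
    mr = ct + 0.1 * np.random.randn(128, 128)
    print("MI(ct, mr) =", mutual_information(ct, mr))
    # A registration loop transforms one image and maximizes this value.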

  15. Characterization and analysis of infrared images

    NASA Astrophysics Data System (ADS)

    Raglin, Adrienne; Wetmore, Alan; Ligon, David

    2006-05-01

    Stokes images in the long-wave infrared (LWIR) and methods for processing polarimetric data continue to be areas of interest. Stokes images, which are sensitive to geometry and material differences, are acquired by measuring the polarization state of the received electromagnetic radiation. The polarimetric data from Stokes images may provide enhancements to conventional IR imagery data. It is generally agreed that polarimetric images can reveal information about objects or features within a scene that is not available through other imaging techniques. This additional information may generate different approaches to segmentation, detection, and recognition of objects or features. Previous research using horizontal and vertical polarization data supports the use of this type of data for image processing tasks. In this work we analyze a sample polarimetric image to show both improved segmentation of objects and derivation of their inherent 3-D geometry.

  16. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach and two implementations of it on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.

  17. Analysis of scanning probe microscope images using wavelets.

    PubMed

    Gackenheimer, C; Cayon, L; Reifenberger, R

    2006-03-01

    The utility of wavelet transforms for analysis of scanning probe images is investigated. Simulated scanning probe images are analyzed using wavelet transforms and compared to a parallel analysis using more conventional Fourier transform techniques. The wavelet method introduced in this paper is particularly useful as an image recognition algorithm to enhance nanoscale objects of a specific scale that may be present in scanning probe images. In its present form, the applied wavelet is optimal for detecting objects with rotational symmetry. The wavelet scheme is applied to the analysis of scanning probe data to better illustrate the advantages that this new analysis tool offers. The wavelet algorithm developed for analysis of scanning probe microscope (SPM) images has been incorporated into the WSxM software which is a versatile freeware SPM analysis package. PMID:16439061
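
    The paper's wavelet is described as optimal for rotationally symmetric objects; a negated Laplacian-of-Gaussian ("Mexican hat") filter is a close, commonly used stand-in. A rough sketch with SciPy, on synthetic data and with an illustrative threshold:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    topograph = np.random.rand(256, 256)          # stand-in SPM height map

    sigma = 4.0                                   # selects the object scale (pixels)
    response = -gaussian_laplace(topograph, sigma=sigma)

    # Bright blobs of radius ~ sigma * sqrt(2) respond strongly; sweeping sigma
    # probes different scales, much like a discrete wavelet transform.
    candidates = response > response.mean() + 3 * response.std()
    print("candidate object pixels:", int(candidates.sum()))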

  18. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D and at high resolution. CBCT jaw images carry potential information for the assessment of bone quality, which is often used for pre-operative implant planning. We propose a comparison method based on normalized histograms (NH) of the inter-dental septum and premolar teeth regions. The NH characteristics from normal and abnormal bone conditions are compared and analyzed. Four test parameters are proposed: the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n), the difference between the teeth and bone peak values of the NH (Δp), and the ratio between the teeth and bone NH ranges (r). The results showed that n, s, and Δp have potential as classification parameters of dental calcium density.
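
    The four test parameters can be sketched directly from the two ROI histograms. The precise definitions in the paper may differ, so this is an assumed reading of s, n, Δp, and r, computed on synthetic intensities:

    import numpy as np

    teeth = np.random.randint(1000, 4096, 5000)   # stand-in CBCT ROI intensities
    bone = np.random.randint(200, 2048, 5000)

    def normalized_histogram(roi, bins=256, vmax=4096):
        h, edges = np.histogram(roi, bins=bins, range=(0, vmax))
        return h / h.sum(), edges

    def nh_range(nh, edges):
        nz = np.nonzero(nh)[0]                    # span of occupied bins
        return edges[nz[-1] + 1] - edges[nz[0]]

    nh_teeth, edges = normalized_histogram(teeth)
    nh_bone, _ = normalized_histogram(bone)

    s = teeth.mean() - bone.mean()                # teeth/bone intensity difference
    n = bone.mean() / teeth.mean()                # bone/teeth intensity ratio
    delta_p = nh_teeth.max() - nh_bone.max()      # NH peak-value difference
    r = nh_range(nh_teeth, edges) / nh_range(nh_bone, edges)  # NH range ratio
    print(s, n, delta_p, r)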

  19. Analysis of Anechoic Chamber Testing of the Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Fenigstein, David; Ruf, Chris; James, Mark; Simmons, David; Miller, Timothy; Buckley, Courtney

    2010-01-01

    The Hurricane Imaging Radiometer System (HIRAD) is a new airborne passive microwave remote sensor developed to observe hurricanes. HIRAD incorporates synthetic thinned array radiometry technology, which uses Fourier synthesis to reconstruct images from an array of correlated antenna elements. The HIRAD system response to a point emitter has been measured in an anechoic chamber, and with these data a Fourier inversion image reconstruction algorithm has been developed. A performance analysis of the apparatus is presented, along with an overview of the image reconstruction algorithm.

  20. Image and Data-analysis Tools For Paleoclimatic Reconstructions

    NASA Astrophysics Data System (ADS)

    Pozzi, M.

    A directory of instruments and computing resources chosen to address problems in paleoclimatic reconstruction is proposed here. In particular, the following points are discussed: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2); b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis); c) neural analyses (backpropagation net, Kohonen feature map, Hopfield net, genetic algorithms). 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graph, histogram, ternary, survivorship); b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, Scanning Electron Microscope and Optical Polarized Microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2-D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background); b) 3-D image analysis (basic stereological procedures; two-dimensional structures: area fraction from the point count, volume fraction from the point count; three-dimensional structures: surface area and the line intercept count; three-dimensional microstructures: line length and the

  1. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High-performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response, leading to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients who have suffered right-hemisphere stroke are compared with adults with no known neurological disorder, and a population of normal 11-year-old school children is included to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis, this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  2. EVALUATION OF COLOR ALTERATION ON FABRICS BY IMAGE ANALYSIS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Evaluation of color changes is usually done manually and is often inconsistent. Image analysis provides a method in which to evaluate color-related testing that is not only simple, but also consistent. Image analysis can also be used to measure areas that were considered too large for the colorimet...

  3. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  4. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  5. Image analysis for discrimination of cervical neoplasia

    NASA Astrophysics Data System (ADS)

    Pogue, Brian W.; Mycek, Mary-Ann; Harper, Diane

    2000-01-01

    Colposcopy involves visual imaging of the cervix for patients who have exhibited some prior indication of abnormality, and the major goals are to visually inspect for any malignancies and to guide biopsy sampling. Currently colposcopy equipment is being upgraded in many health care centers to incorporate digital image acquisition and archiving. These permanent images can be analyzed for characteristic features and color patterns which may enhance the specificity and objectivity of the routine exam. In this study a series of images from patients with biopsy-confirmed cervical intraepithelial neoplasia stage 2/3 is compared with images from patients with biopsy-confirmed immature squamous metaplasia, with the goal of determining optimal criteria for automated discrimination between them. All images were separated into their red, green, and blue channels, and comparisons were made between relative intensity, intensity variation, spatial frequencies, fractal dimension, and Euler number. This study indicates that computer-based processing of cervical images can provide some discrimination of the type of tissue features which are important for clinical evaluation, with the Euler number being the most clinically useful feature for discriminating metaplasia from neoplasia. There was also a strong indication that morphology observed in the blue channel of the image provided more information about epithelial cell changes. Further research in this field can lead to advances in computer-aided diagnosis as well as the potential for online image enhancement in digital colposcopy.

  6. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that deal directly with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis; on some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
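
    The abundance estimation at the heart of the method solves a least squares problem under nonnegativity and sum-to-one constraints. A per-pixel sketch with SciPy's SLSQP solver, using a synthetic endmember matrix rather than the AVIRIS data:

    import numpy as np
    from scipy.optimize import minimize

    bands, n_end = 50, 3
    M = np.abs(np.random.rand(bands, n_end))      # endmember signatures (columns)
    x = M @ np.array([0.6, 0.3, 0.1]) + 0.001 * np.random.randn(bands)

    # Minimize ||M a - x||^2 subject to a >= 0 and sum(a) == 1
    res = minimize(lambda a: np.sum((M @ a - x)**2),
                   x0=np.full(n_end, 1.0 / n_end),
                   bounds=[(0.0, 1.0)] * n_end,
                   constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0},
                   method="SLSQP")
    print(res.x)   # per-pixel abundances; stacked over pixels they form the
                   # fractional images that are then encoded for compression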

  7. Analysis of airborne MAIS imaging spectrometric data for mineral exploration

    SciTech Connect

    Wang Jinnian; Zheng Lanfen; Tong Qingxi

    1996-11-01

    High spectral resolution imaging spectrometric systems have made quantitative analysis and mapping of surface composition possible. The key issue is the quantitative approach for analysis of surface parameters from imaging spectrometer data. This paper describes the methods and stages of quantitative analysis: (1) Extracting surface reflectance from the imaging spectrometer image. Laboratory and in-flight field measurements are conducted to calibrate the imaging spectrometer data, and atmospheric correction is applied to obtain ground reflectance using the empirical line method and radiative transfer modeling. (2) Determining the quantitative relationship between absorption band parameters from the imaging spectrometer data and the chemical composition of minerals. (3) Spectral comparison between the spectra of a spectral library and the spectra derived from the imagery. A wavelet analysis-based spectrum-matching technique for quantitative analysis of imaging spectrometer data has been developed. Airborne MAIS imaging spectrometer data were used for analysis, and the results have been applied to mineral and petroleum exploration in the Tarim Basin area, China. 8 refs., 8 figs.

  8. Low-cost image analysis system

    SciTech Connect

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. The system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  9. Analysis of Images from Experiments Investigating Fragmentation of Materials

    SciTech Connect

    Kamath, C; Hurricane, O

    2007-09-10

    Image processing techniques have been used extensively to identify objects of interest in image data and extract representative characteristics for these objects. However, this can be a challenge due to the presence of noise in the images and the variation across images in a dataset. When the number of images to be analyzed is large, the algorithms used must also be relatively insensitive to the choice of parameters and lend themselves to partial or full automation. This not only avoids manual analysis which can be time consuming and error-prone, but also makes the analysis reproducible, thus enabling comparisons between images which have been processed in an identical manner. In this paper, we describe our approach to extracting features for objects of interest in experimental images. Focusing on the specific problem of fragmentation of materials, we show how we can extract statistics for the fragments and the gaps between them.
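
    A typical fragment-statistics step of this kind, reduced to its simplest form, is connected-component labeling of a segmented image; the threshold below is illustrative, not the authors' segmentation.

    import numpy as np
    from skimage.measure import label, regionprops

    frame = np.random.rand(256, 256)              # stand-in experimental image
    fragments = frame > 0.7                       # crude, illustrative segmentation

    labeled = label(fragments)
    areas = np.array([p.area for p in regionprops(labeled)])
    print("fragments:", labeled.max(),
          "mean area:", areas.mean() if areas.size else 0.0)
    # Gap statistics follow the same pattern applied to label(~fragments).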

  10. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture. In addition, UV-A-induced fluorescence images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. To maximize the evaluation efficacy across various skin lesions, it is necessary to integrate these imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescence color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  11. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, thus fulfilling the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were solved in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. PMID:27182830

  12. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that hazy weather weakens remote sensing interpretation through the poor visibility and short detection distance of haze-degraded images. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure have been considered; on the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized. Multi-scale analysis through the wavelet transform is therefore employed so that the low-frequency components dominated by haze and the high-frequency coefficients occupied by image details or edges can be processed separately. Following the measurement of the polarization state by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in hazy weather, from which the best polarized image min I and the worst one max I can be synthesized. These two haze-contaminated polarized images were then decomposed into different spatial layers with wavelet analysis; the low-frequency images were processed via a polarization dehazing algorithm, while the high-frequency components were manipulated with a nonlinear transform. The final haze-free image is reconstructed by inverse wavelet reconstruction. Experimental results verify that the proposed dehazing method can substantially improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
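
    The synthesis of the best and worst polarized images from the three acquisitions follows from the linear Stokes parameters; a minimal sketch, with synthetic stand-ins for the 0°, 45°, and 90° images:

    import numpy as np

    I0 = np.random.rand(128, 128)                 # stand-ins for the 0, 45 and
    I45 = np.random.rand(128, 128)                # 90 degree polarized images
    I90 = np.random.rand(128, 128)

    S0 = I0 + I90                                 # total intensity
    S1 = I0 - I90
    S2 = 2 * I45 - S0

    L = np.sqrt(S1**2 + S2**2)                    # linearly polarized intensity
    I_max = (S0 + L) / 2                          # worst image (most airlight)
    I_min = (S0 - L) / 2                          # best image (least airlight)
    dolp = np.divide(L, S0, out=np.zeros_like(L), where=S0 > 0)
    print(dolp.mean())
    # I_min and I_max feed the wavelet-domain dehazing stage.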

  13. Spatio-spectral image analysis using classical and neural algorithms

    SciTech Connect

    Roberts, S.; Gisler, G.R.; Theiler, J.

    1996-12-31

    Remote imaging at high spatial resolution has a number of environmental, industrial, and military applications. Analysis of high-resolution multi-spectral images usually involves either spectral analysis of single pixels in a multi- or hyper-spectral image or spatial analysis of multiple pixels in a panchromatic or monochromatic image. Although individually insufficient for some pattern recognition applications, the combination of spatial and spectral analytical techniques may allow the identification of more complex signatures that might not otherwise be manifested in the individual spatial or spectral domains. We report on a preliminary investigation of unsupervised classification methodologies (using both "classical" and "neural" algorithms) to identify potentially revealing features in these images. We apply dimension-reduction preprocessing to the images, cluster, and compare the clusterings obtained by different algorithms. Our classification results are analyzed both visually and with a suite of objective, quantitative measures.

  14. Dynamic infrared imaging in identification of breast cancer tissue with combined image processing and frequency analysis.

    PubMed

    Joro, R; Lääperi, A-L; Soimakallio, S; Järvenpää, R; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Dastidar, P

    2008-01-01

    Five combinations of image-processing algorithms were applied preoperatively to dynamic infrared (IR) images of six breast cancer patients to establish optimal enhancement of cancer tissue before frequency analysis. Mid-wave photovoltaic (PV) IR cameras with 320x254 and 640x512 pixels were used. The signal-to-noise ratio and the specificity for breast cancer were evaluated for each image-processing combination from the image series of each patient. Before image processing and frequency analysis, the effect of patient movement was minimized with a stabilization program developed and tested in the study, which stabilizes image slices using surface markers set as measurement points on the skin of the imaged breast. A mathematical equation for a superiority value was developed for comparison of the key ratios of the image-processing combinations. The ability of each combination to locate the mammography finding of breast cancer in each patient was compared. Our results show that data collected with a 640x512-pixel mid-wave PV camera, processed with methods optimizing the signal-to-noise ratio, morphological image processing, and linear image restoration before frequency analysis, possess the greatest superiority value, showing the cancer area most clearly, also in the match centre of the mammography estimation. PMID:18666012

  15. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD, a generalization of K-means clustering to QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407

  16. An approach to multi-temporal MODIS image analysis using image classification and segmentation

    NASA Astrophysics Data System (ADS)

    Senthilnath, J.; Bajpai, Shivesh; Omkar, S. N.; Diwakar, P. G.; Mani, V.

    2012-11-01

    This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time-series analysis of satellite images, utilizing pixel spectral information for image classification and region-based segmentation for extracting water-covered regions. Analysis of MODIS satellite images is applied in three stages: before flood, during flood, and after flood. Water regions are extracted from the MODIS images using image classification (based on spectral information) and image segmentation (based on spatial information). Multi-temporal MODIS images from "normal" (non-flood) and flood time periods are processed in two steps. In the first step, image classifiers such as Support Vector Machines (SVM) and Artificial Neural Networks (ANN) separate the image pixels into water and non-water groups based on their spectral features. The classified image is then segmented using spatial features of the water pixels to remove misclassified water. From the results obtained, we evaluate the performance of the method and conclude that the use of image classification (SVM and ANN) and region-based image segmentation is an accurate and reliable approach for the extraction of water-covered regions.
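
    The first (classification) step can be sketched with scikit-learn; the spectra and labels below are synthetic stand-ins, not MODIS data, and the toy labeling rule exists only to make the example self-contained.

    import numpy as np
    from sklearn.svm import SVC

    n_pixels, n_bands = 2000, 7                   # e.g. seven spectral bands
    X = np.random.rand(n_pixels, n_bands)         # stand-in pixel spectra
    y = (X[:, 1] < X[:, 0]).astype(int)           # toy "water" labeling rule

    clf = SVC(kernel="rbf").fit(X[:1500], y[:1500])   # train on labeled pixels
    pred = clf.predict(X[1500:])                      # classify the rest
    print("predicted water fraction:", pred.mean())
    # The classified map is then refined by region-based segmentation.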

  17. Multiple sclerosis medical image analysis and information management.

    PubMed

    Liu, Lifeng; Meier, Dominik; Polgar-Turcsanyi, Mariann; Karkocha, Pawel; Bakshi, Rohit; Guttmann, Charles R G

    2005-01-01

    Magnetic resonance imaging (MRI) has become a central tool for patient management, as well as research, in multiple sclerosis (MS). Measurements of disease burden and activity derived from MRI through quantitative image analysis techniques are increasingly being used. There are many complexities and challenges in building computerized processing pipelines to ensure efficiency, reproducibility, and quality control for MRI scans from MS patients. Such paradigms require advanced image processing and analysis technologies, as well as integrated database management systems to ensure the most utility for clinical and research purposes. This article reviews pipelines available for quantitative clinical MRI research in MS, including image segmentation, registration, time-series analysis, performance validation, visualization techniques, and advanced medical imaging software packages. To address the complex demands of the sequential processes, the authors developed a workflow management system that uses a centralized database and distributed computing system for image processing and analysis. The implementation of their system includes a web-form-based Oracle database application for information management and event dispatching, and multiple modules for image processing and analysis. The seamless integration of processing pipelines with the database makes it more efficient for users to navigate complex, multistep analysis protocols, reduces the user's learning curve, reduces the time needed for combining and activating different computing modules, and allows for close monitoring for quality-control purposes. The authors' system can be extended to general applications in clinical trials and to routine processing for image-based clinical research. PMID:16385023

  18. Development of a quantitative autoradiography image analysis system

    SciTech Connect

    Hoffman, T.J.; Volkert, W.A.; Holmes, R.A.

    1986-03-01

    A low cost image analysis system suitable for quantitative autoradiography (QAR) analysis has been developed. Autoradiographs can be digitized using a conventional Newvicon television camera interfaced to an IBM-XT microcomputer. Software routines for image digitization and capture permit the acquisition of thresholded or windowed images with graphic overlays that can be stored on storage devices. Image analysis software performs all background and non-linearity corrections prior to display as black/white or pseudocolor images. The relationship of pixel intensity to a standard radionuclide concentration allows the production of quantitative maps of tissue radiotracer concentrations. An easily modified subroutine is provided for adaptation to use appropriate operational equations when parameters such as regional cerebral blood flow or regional cerebral glucose metabolism are under investigation. This system could provide smaller research laboratories with the capability of QAR analysis at relatively low cost.
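
    The pixel-intensity-to-concentration mapping at the core of QAR can be sketched as a calibration fit against co-exposed standards; the standard values and units below are assumptions, and the paper notes the operational equation is easily swapped for the parameter under study.

    import numpy as np

    std_intensity = np.array([18.0, 54.0, 96.0, 141.0, 180.0])   # measured standards
    std_concentration = np.array([0.0, 25.0, 50.0, 75.0, 100.0]) # assumed kBq/g

    coeff = np.polyfit(std_intensity, std_concentration, 1)      # linear calibration

    image = np.random.uniform(18.0, 180.0, (64, 64))  # stand-in digitized image
    concentration_map = np.polyval(coeff, image)      # quantitative tissue map
    print(concentration_map.min(), concentration_map.max())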

  19. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system has been developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] was underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  20. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
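
    A minimal sketch of the two-color thermographic-phosphor principle: the ratio of the two wavelength-filtered images cancels illumination and coating-thickness effects and is mapped to temperature through a monotonic calibration curve. File names and calibration values are placeholders, not values from the NASA system.

```python
import numpy as np

# Two wavelength-filtered images of the same coated surface (hypothetical files).
band_a = np.load("band_a.npy").astype(float)
band_b = np.load("band_b.npy").astype(float)

# The band ratio cancels illumination and coating-thickness variations.
ratio = band_a / np.clip(band_b, 1e-6, None)

# Placeholder calibration of ratio vs. temperature from a reference coupon;
# the curve must be monotonic for a simple interpolated lookup.
cal_ratio = np.array([0.2, 0.5, 1.0, 1.8, 3.0])
cal_temp_K = np.array([300.0, 350.0, 400.0, 450.0, 500.0])
temperature_map = np.interp(ratio, cal_ratio, cal_temp_K)
```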

  1. Rapid analysis and exploration of fluorescence microscopy images.

    PubMed

    Pavie, Benjamin; Rajaram, Satwik; Ouyang, Austin; Altschuler, Jason M; Steininger, Robert J; Wu, Lani F; Altschuler, Steven J

    2014-01-01

    Despite rapid advances in high-throughput microscopy, quantitative image-based assays still pose significant challenges. While a variety of specialized image analysis tools are available, most traditional image-analysis-based workflows have steep learning curves (for fine tuning of analysis parameters) and result in long turnaround times between imaging and analysis. In particular, cell segmentation, the process of identifying individual cells in an image, is a major bottleneck in this regard. Here we present an alternative, cell-segmentation-free workflow based on PhenoRipper, an open-source software platform designed for the rapid analysis and exploration of microscopy images. The pipeline presented here is optimized for immunofluorescence microscopy images of cell cultures and requires minimal user intervention. Within half an hour, PhenoRipper can analyze data from a typical 96-well experiment and generate image profiles. Users can then visually explore their data, perform quality control on their experiment, verify response to perturbations and check reproducibility of replicates. This facilitates a rapid feedback cycle between analysis and experiment, which is crucial during assay optimization. This protocol is useful not just for first-pass quality-control analysis, but also as an end-to-end solution, especially for screening. The workflow described here scales to large data sets such as those generated by high-throughput screens, and has been shown to group experimental conditions by phenotype accurately over a wide range of biological systems. The PhenoBrowser interface provides an intuitive framework to explore the phenotypic space and relate image properties to biological annotations. Taken together, the protocol described here will lower the barriers to adopting quantitative analysis of image-based screens. PMID:24686220
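
    In the spirit of the segmentation-free profiling described above (a sketch, not PhenoRipper's actual open-source code): summarize each image by the frequency of recurring small-block appearances, so images can be compared without identifying single cells.

```python
import numpy as np
from sklearn.cluster import KMeans

def block_profile(images, block=16, n_types=32, seed=0):
    """Segmentation-free image profiles: histogram of quantized block appearances."""
    # Cut every image into non-overlapping block x block tiles.
    tiles, owners = [], []
    for i, img in enumerate(images):
        h, w = (np.array(img.shape) // block) * block
        t = img[:h, :w].reshape(h // block, block, w // block, block)
        t = t.transpose(0, 2, 1, 3).reshape(-1, block * block)
        tiles.append(t)
        owners.append(np.full(len(t), i))
    tiles, owners = np.concatenate(tiles), np.concatenate(owners)

    # Learn a vocabulary of recurring block appearances across the experiment.
    labels = KMeans(n_clusters=n_types, random_state=seed, n_init=10).fit_predict(tiles)

    # Each image's profile = frequency of each block type.
    profiles = np.zeros((len(images), n_types))
    for i in range(len(images)):
        counts = np.bincount(labels[owners == i], minlength=n_types)
        profiles[i] = counts / counts.sum()
    return profiles
```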

  2. Research of second harmonic generation images based on texture analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, with its reduced phototoxicity and photobleaching, has been widely used in medicine as a potential noninvasive tool for imaging biological tissues. In this paper, we review the principles of texture analysis, including statistical, transform, structural and model-based methods, and give examples of its applications. We then apply texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary patterns (LBP) and the wavelet transform was used to extract texture features from SHG images of collagen in normal and abnormal scars, and the scar SHG images were then classified as normal or abnormal. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with the wavelet transform achieved higher accuracy, and can provide a new way to diagnose scar types clinically. Finally, future developments of texture analysis for SHG images are discussed.
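
    A small sketch of the LBP half of the feature extraction (the paper additionally combines LBP with wavelet-transform features); the classifier choice and the placeholder data are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram(image, P=8, R=1):
    """Normalized histogram of uniform LBP codes as a texture descriptor."""
    codes = local_binary_pattern(image, P, R, method="uniform")  # values 0 .. P+1
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# X: list of grayscale SHG images; y: 0 = normal scar, 1 = abnormal scar.
# feats = np.array([lbp_histogram(img) for img in X])
# clf = LogisticRegression().fit(feats, y)
```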

  3. Multistage hierarchy for fast image analysis

    NASA Astrophysics Data System (ADS)

    Grudin, Maxim A.; Harvey, David M.; Timchenko, Leonid I.

    1996-12-01

    In this paper, a novel approach is proposed which allows for an efficient reduction of the amount of visual data required for representing structural information in the image. This is a multistage architecture which investigates partial correlations between structural image components. A mathematical description of the multistage hierarchical processing is provided, together with the network architecture. Initially, the image is partitioned for processing in parallel channels. In each channel, the structural components are transformed and subsequently separated, depending on their structural significance, to be then combined with the components from other channels for further processing. The output result is represented as a pattern vector, whose components are computed one at a time to allow the quickest possible response. The input gray-scale image is transformed before the processing begins, so that each pixel contains information about the spatial structure of its neighborhood. The most correlated information is extracted first, making the algorithm tolerant to minor structural changes.

  4. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed in IDL to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The included tools allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as several image-processing procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.
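
    A minimal sketch of the cloud-tracking idea that underlies wind measurements in tools of this kind: a small cloud template from one navigated image is located in a later image by normalized cross-correlation, and the displacement divided by the time step gives a wind velocity. Function names and the correlation choice are assumptions, not PLIA internals.

```python
import numpy as np
from skimage.feature import match_template

def track_cloud(img_t0, img_t1, row, col, half=15):
    """Find the displacement of a cloud feature between two navigated images."""
    tmpl = img_t0[row - half:row + half + 1, col - half:col + half + 1]
    corr = match_template(img_t1, tmpl, pad_input=True)   # peak at template center
    r1, c1 = np.unravel_index(np.argmax(corr), corr.shape)
    return r1 - row, c1 - col    # pixel displacement; divide by dt for a velocity
```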

  5. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems attempt to significantly reduce the labor and subjectivity of traditional manual interpretation of histology images. The task of automated histology image analysis is usually not simple, due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas, of the cervix, prostate, breast, and lung, are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890
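
    A compact sketch of one widely used histology segmentation step of the kind reviewed above: separating touching nuclei with a distance-transform watershed. It assumes nuclei are darker than their surroundings; the parameters are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(gray):
    """Separate touching nuclei with a distance-transform watershed."""
    mask = gray < threshold_otsu(gray)            # nuclei assumed darker than background
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=5, labels=mask.astype(int))
    markers = np.zeros(gray.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one seed per nucleus
    return watershed(-distance, markers, mask=mask)
```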

  6. MR brain image analysis in dementia: From quantitative imaging biomarkers to ageing brain models and imaging genetics.

    PubMed

    Niessen, Wiro J

    2016-10-01

    MR brain image analysis has been one of the most active research areas in medical image analysis over the past two decades. In this article, it is discussed how the field developed from the construction of tools for automatic quantification of brain morphology, function, connectivity and pathology, to creating models of the ageing brain in normal ageing and disease, and tools for integrated analysis of imaging and genetic data. The current and future role of the field in improved understanding of the development of neurodegenerative disease is discussed, along with its potential for aiding in early and differential diagnosis and prognosis of different types of dementia. For the latter, the use of reference imaging data and reference models derived from large clinical and population imaging studies, and the application of machine learning techniques to these reference data, are expected to play a key role. PMID:27344937

  7. Localised manifold learning for cardiac image analysis

    NASA Astrophysics Data System (ADS)

    Bhatia, Kanwal K.; Price, Anthony N.; Hajnal, Jo V.; Rueckert, Daniel

    2012-02-01

    Manifold learning is increasingly being used to discover the underlying structure of medical image data. Traditional approaches operate on whole images with a single measure of similarity used to compare entire images. In this way, information on the locality of differences is lost and smaller trends may be masked by dominant global differences. In this paper, we propose the use of multiple local manifolds to analyse regions of images without any prior knowledge of which regions are important. Localised manifolds are created by partitioning images into regular subsections with a manifold constructed for each patch. We propose a framework for incorporating information from the neighbours of each patch to calculate a coherent embedding. This generates a simultaneous dimensionality reduction of all patches and results in the creation of embeddings which are spatially-varying. Additionally, a hierarchical method is presented to enable a multi-scale embedding solution. We use this to extract spatially-varying respiratory and cardiac motions from cardiac MRI. Although there is a complex interplay between these motions, we show how they can be separated on a regional basis. We demonstrate the utility of the localised joint embedding over a global embedding of whole images and over embedding individual patches independently.
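
    A stripped-down sketch of the patch-wise idea, omitting the paper's actual contributions (neighbour coupling between patches and the multi-scale hierarchy): each regular patch of a population of images gets its own low-dimensional embedding.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

def local_embeddings(images, patch=32, n_components=2):
    """One low-dimensional embedding per image patch across a population.

    images: (n_subjects, H, W) array with H, W divisible by `patch`;
    returns {(patch_row, patch_col): (n_subjects, n_components) coordinates}.
    Requires more subjects than components for the embedding to be defined.
    """
    n, H, W = images.shape
    out = {}
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            X = images[:, r:r + patch, c:c + patch].reshape(n, -1)
            out[(r // patch, c // patch)] = SpectralEmbedding(
                n_components=n_components).fit_transform(X)
    return out
```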

  8. Radar images analysis for scattering surfaces characterization

    NASA Astrophysics Data System (ADS)

    Piazza, Enrico

    1998-10-01

    In the context of the detection and recognition of airplanes and vehicles moving on the airport surface, the present work deals mainly with the processing of images gathered by a high-resolution radar sensor. The radar images used to test the investigated algorithms come from sequences acquired in field experiments carried out by the Electronic Engineering Department of the University of Florence. The radar is the Ka-band radar operating at the 'Leonardo da Vinci' Airport in Fiumicino (Rome). The images obtained from the radar scan converter are digitized and expressed in x, y (pixel) coordinates. For a correct matching of the images, these are converted to true geometrical coordinates (meters) on the basis of fixed points on an airport map. By correlating a 2-D multipoint airplane template with actual radar images, the value of the signal at the points covered by the template can be extracted. Results for many observations show a typical response for the main sections of the fuselage and the wings. For the fuselage, the back-scattered echo is low at the prow, becomes larger near the center of the aircraft and then decreases again toward the tail. For the wings, the signal grows with a fairly regular slope from the fuselage to the tips, where the signal is strongest.
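
    A minimal sketch of the pixel-to-map correction described above, assuming an affine model fitted to fixed control points; the coordinates are placeholders, not the Fiumicino calibration.

```python
import numpy as np
from skimage.transform import estimate_transform

# Fixed reference points: pixel positions in the radar image and the
# corresponding map coordinates in meters (placeholder values).
pixel_pts = np.array([[120, 340], [410, 90], [700, 520], [230, 760]], float)
map_pts_m = np.array([[1500.0, 2300.0], [3100.0, 800.0],
                      [5200.0, 3900.0], [1900.0, 5600.0]])

# Least-squares affine fit; tform maps pixel coordinates to meters.
tform = estimate_transform("affine", pixel_pts, map_pts_m)
template_px = np.array([[300, 400], [305, 410]], float)   # template points, pixels
print(tform(template_px))                                 # same points, meters
```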

  9. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  10. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language 8.1. We show that cluster analysis allows us to obtain good agreement between the clustered regions and the corresponding emission regions, such as the bladder, the liver, and the tumor. We also show a good correspondence between the time-activity curves of the different regions obtained by CA and by manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic, unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
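
    A minimal sketch of unsupervised clustering of a dynamic sequence, assuming pixels are grouped by the shape of their time-activity curves; the per-pixel normalization is an assumption, and IDL (used by the authors) is replaced here by Python.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_dynamic_image(frames, n_clusters=4, seed=0):
    """Group pixels of a dynamic sequence by the shape of their time courses.

    frames: (T, H, W) image stack; returns an (H, W) label map plus the
    (n_clusters, T) mean time-activity curves of the clusters.
    """
    T, H, W = frames.shape
    curves = frames.reshape(T, -1).T                               # one curve per pixel
    curves = curves / (curves.max(axis=1, keepdims=True) + 1e-12)  # compare shapes, not amplitudes
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(curves)
    return km.labels_.reshape(H, W), km.cluster_centers_
```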

  11. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
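
    A small sketch of the RGB read-out step, assuming well centers are already known (the original work used GIMP, Photoshop or an ImageJ macro); the file name and coordinates are placeholders.

```python
import numpy as np
from imageio.v3 import imread

def well_rgb(path, centers, half=10):
    """Mean red-green-blue values in square regions around each sensor well.

    centers: list of (row, col) pixel positions of the wells, found by hand
    or by grid detection; placeholder coordinates in this sketch.
    """
    img = imread(path).astype(float)                  # (H, W, 3) scanned image
    return np.array([
        img[r - half:r + half, c - half:c + half].reshape(-1, 3).mean(axis=0)
        for r, c in centers
    ])

# rgb = well_rgb("detechip_scan.jpg", centers=[(120, 80), (120, 160)])
```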

  12. Anima: modular workflow system for comprehensive image data analysis.

    PubMed

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies, focusing on testing different algorithms developed in different imaging platforms and on an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  13. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  14. An Analysis of the Magneto-Optic Imaging System

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  15. Uncooled LWIR imaging: applications and market analysis

    NASA Astrophysics Data System (ADS)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market requiring uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled sensors, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches towards the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors to night vision for automobiles and smartphones. The appearance of such commodity products is changing existing business models. Further technological innovation is necessary for creating a consumer market, and there will be room for other companies supplying components and materials, such as lens and getter materials, to enter the consumer market.

  16. Continuous-wave terahertz scanning image resolution analysis and restoration

    NASA Astrophysics Data System (ADS)

    Li, Qi; Yin, Qiguo; Yao, Rui; Ding, Shenghui; Wang, Qi

    2010-03-01

    The resolution of continuous-wave (CW) terahertz scanning images is limited by many factors, among which the aperture effect of the finite focus diameter is very important. We have investigated the factors that affect terahertz (THz) image resolution in detail through theoretical analysis and simulation. On the other hand, in order to enhance THz image resolution, the Richardson-Lucy algorithm has been introduced as a promising approach to improve image detail. By analyzing the imaging theory, it is proposed that the intensity distribution of the actual THz laser focal spot can be used approximately as the point spread function (PSF) in the restoration algorithm. The focal spot image can be obtained with a pyroelectric camera, and the mean-filtered focal spot image is used as the PSF. Simulation and experiment show that the implemented algorithm is comparatively effective.
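
    A hedged sketch of the restoration recipe described above: the mean-filtered focal-spot image serves as the PSF for Richardson-Lucy deconvolution. File names and the iteration count are assumptions; `num_iter` is the scikit-image >= 0.19 spelling of the keyword.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.restoration import richardson_lucy

# Measured focal-spot image from the pyroelectric camera stands in for the PSF.
spot = np.load("focal_spot.npy").astype(float)       # hypothetical file name
psf = uniform_filter(spot, size=3)                   # mean filtering, as in the paper
psf /= psf.sum()                                     # PSF must integrate to 1

scan = np.load("thz_scan.npy").astype(float)         # hypothetical CW THz raster scan
scan /= scan.max()                                   # keep data in [0, 1] for clip=True
restored = richardson_lucy(scan, psf, num_iter=30)
```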

  17. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
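
    A minimal sketch of one common fractal-dimension estimator (box counting on a thresholded pattern); papers in this area often use isarithm or variogram estimators on the NDVI surface itself, so this is an illustrative stand-in, not the authors' exact method.

```python
import numpy as np

def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of the fractal dimension of a binary pattern.

    mask: 2-D boolean array (e.g., NDVI above a threshold).
    """
    counts = []
    for s in sizes:
        h, w = (np.array(mask.shape) // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of log(count) vs. log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```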

  1. Optical image acquisition system for colony analysis

    NASA Astrophysics Data System (ADS)

    Wang, Weixing; Jin, Wenbiao

    2006-02-01

    For the counting of both colonies and plaques, there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have made efforts to develop this kind of system. Our investigation shows that some existing systems have problems, as they belong to a new class of technology product. One of the main problems is image acquisition. In order to acquire colony images of good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.
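
    Downstream of acquisition, a minimal counting sketch assuming dark colonies on a uniformly back-lit dish: threshold, label connected components, filter by size. Parameters are placeholders, not a commercial system's algorithm.

```python
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_colonies(gray, min_area=20):
    """Count dark colonies on a uniformly back-lit dish image."""
    mask = gray < threshold_otsu(gray)          # colonies darker than the background
    regions = regionprops(label(mask))          # connected components = candidates
    return sum(1 for r in regions if r.area >= min_area)
```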

  2. System Matrix Analysis for Computed Tomography Imaging

    PubMed Central

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
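
    A simplified 2-D sketch of Siddon's parametric method for one system-matrix row: the ray's parametric crossings with the grid lines are merged, and each interval's length is assigned to the pixel containing its midpoint. The 3-D geometry and the incremental optimizations of the original algorithm are omitted.

```python
import numpy as np

def siddon_2d(p0, p1, nx, ny):
    """Nonzero entries of one system-matrix row: intersection lengths of the
    ray p0 -> p1 with a grid of unit pixels occupying [0, nx] x [0, ny].
    Returns a list of ((ix, iy), length) pairs.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ray_len = np.hypot(d[0], d[1])

    # Parametric positions where the ray crosses vertical / horizontal grid lines.
    alphas = [0.0, 1.0]
    if d[0] != 0.0:
        alphas += [(x - p0[0]) / d[0] for x in range(nx + 1)]
    if d[1] != 0.0:
        alphas += [(y - p0[1]) / d[1] for y in range(ny + 1)]
    alphas = np.unique([a for a in alphas if 0.0 <= a <= 1.0])

    entries = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d        # interval midpoint identifies the pixel
        ix, iy = int(np.floor(mid[0])), int(np.floor(mid[1]))
        if 0 <= ix < nx and 0 <= iy < ny:
            entries.append(((ix, iy), (a1 - a0) * ray_len))
    return entries

# row = siddon_2d((-1.0, 2.2), (9.0, 4.8), nx=8, ny=8)
```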

  3. [Analysis of bone tissues by intravital imaging].

    PubMed

    Mizuno, Hiroki; Yamashita, Erika; Ishii, Masaru

    2016-05-01

    In recent years,"the fluorescent imaging techniques"has made rapid advances, it has become possible to observe the dynamics of living cells in individuals or tissues. It has been considered that it is extremely difficult to observe the living bone marrow directly because bone marrow is surrounded by a hard calcareous. But now, we established a method for observing the cells constituting the bone marrow of living mice in real time by the use of the intravital two-photon imaging system. In this article, we show the latest data and the reports about the hematopoietic stem cells and the leukemia cells by using the intravital imaging techniques, and also discuss its further application. PMID:27117619

  4. Texture Analysis for Classification of RISAT-II Images

    NASA Astrophysics Data System (ADS)

    Chakraborty, D.; Thakur, S.; Jeyaram, A.; Krishna Murthy, Y. V. N.; Dadhwal, V. K.

    2012-08-01

    RISAT-II, or Radar Imaging Satellite-II, is a microwave imaging satellite launched by ISRO to image the earth by day and night and in all weather conditions. This satellite enhances ISRO's capability for disaster management applications, together with forestry, agricultural, urban and oceanographic applications. Conventional pixel-based classification techniques cannot classify this type of image, since they do not take into account the texture information of the image. This paper presents a method to classify high-resolution RISAT-II microwave images based on texture analysis. The method suppresses speckle noise in the microwave image before analyzing its texture, since speckle is essentially a form of noise that degrades image quality and makes interpretation (visual or digital) more difficult. A local adaptive median filter is developed that uses local statistics to detect speckle noise in the microwave image and to replace it with a local median value. A Local Binary Pattern (LBP) operator is proposed to measure the texture around each pixel of the speckle-suppressed image. It considers a series of circles (2-D) centered on the pixel with incremental radius values, and the pixels intersected on the perimeter of the circles of radius r (where r = 1, 3 and 5) are used for measuring the LBP of the center pixel. The significance of LBP is that it measures the texture around each pixel and is computationally simple. The ISODATA method is used to cluster the transformed LBP image. The proposed method adequately classifies RISAT-II X-band microwave images without human intervention.
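
    A hedged sketch of a local adaptive median filter in the spirit of the one described above: a pixel is replaced by the local median only when it deviates strongly from the local statistics, so texture elsewhere is preserved. The outlier rule (k local standard deviations) is an assumption, not the paper's exact statistic.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def adaptive_median_despeckle(img, size=5, k=1.5):
    """Replace only speckle-like outliers with the local median."""
    img = img.astype(float)
    local_mean = uniform_filter(img, size)
    local_sq = uniform_filter(img ** 2, size)
    local_std = np.sqrt(np.clip(local_sq - local_mean ** 2, 0, None))
    outlier = np.abs(img - local_mean) > k * local_std
    return np.where(outlier, median_filter(img, size), img)
```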

  5. Four challenges in medical image analysis from an industrial perspective.

    PubMed

    Weese, Jürgen; Lorenz, Cristian

    2016-10-01

    Today's medical imaging systems produce a huge amount of images containing a wealth of information. However, the information is hidden in the data, and image analysis algorithms are needed to extract it, to make it readily available for medical decisions and to enable an efficient work flow. Advances in medical image analysis over the past 20 years mean there are now many algorithms and ideas available that make it possible to address medical image analysis tasks in commercial solutions with sufficient performance in terms of accuracy, reliability and speed. At the same time, new challenges have arisen. Firstly, there is a need for more generic image analysis technologies that can be efficiently adapted for a specific clinical task. Secondly, efficient approaches for ground truth generation are needed to match the increasing demands regarding validation and machine learning. Thirdly, algorithms for analyzing heterogeneous image data are needed. Finally, anatomical and organ models play a crucial role in many applications, and algorithms to construct patient-specific models from medical images with a minimum of user interaction are needed. These challenges are complementary to the on-going need for more accurate, more reliable and faster algorithms, and dedicated algorithmic solutions for specific applications. PMID:27344939

  6. Disability in Physical Education Textbooks: An Analysis of Image Content

    ERIC Educational Resources Information Center

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  7. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  8. Ringed impact craters on Venus: An analysis from Magellan images

    NASA Technical Reports Server (NTRS)

    Alexopoulos, Jim S.; Mckinnon, William B.

    1992-01-01

    We have analyzed cycle 1 Magellan images covering approximately 90 percent of the venusian surface and have identified 55 unequivocal peak-ring craters and multiringed impact basins. This comprehensive study (52 peak-ring craters and at least 3 multiringed impact basins) complements our earlier independent analysis of Arecibo and Venera images and initial Magellan data and that of the Magellan team.

  9. Higher Education Institution Image: A Correspondence Analysis Approach.

    ERIC Educational Resources Information Center

    Ivy, Jonathan

    2001-01-01

    Investigated how marketing is used to convey higher education institution type image in the United Kingdom and South Africa. Using correspondence analysis, revealed the unique positionings created by old and new universities and technikons in these countries. Also identified which marketing tools they use in conveying their image. (EV)

  10. Geopositioning Precision Analysis of Multiple Image Triangulation Using LRO NAC Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations, with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precision using all nine images is 0.60 m, 0.50 m and 1.23 m in the along-track, cross-track and height directions, which is better than most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.
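
    The geometric core of multi-image triangulation, as a hedged sketch: each image measurement defines a ray (sensor position plus look direction from the sensor model), and the ground point is the least-squares intersection of all rays. The rigorous LROC sensor modelling and RPC fitting are omitted.

```python
import numpy as np

def triangulate(centers, directions):
    """Least-squares intersection of N rays (sensor center + direction).

    Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) c_i, the normal
    equations for the point minimizing squared distance to all rays.
    Needs at least two non-parallel rays, otherwise the system is singular.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(c, float)
    return np.linalg.solve(A, b)
```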

  11. Analysis of PETT images in psychiatric disorders

    SciTech Connect

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal component, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnoses of schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGl) in a group of chronic schizophrenics, using the region-of-interest approach. 15 references, 4 figures, 3 tables.
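
    A minimal sketch of the feature-extraction step, assuming the low-order 2-D Fourier coefficients of each image form the feature vector that is then fed to PCA, clustering or discriminant analysis; the block size is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

def fourier_features(images, keep=8):
    """Low-order 2-D Fourier coefficients as a compact per-image feature vector."""
    feats = []
    for img in images:
        F = np.fft.fftshift(np.fft.fft2(img))
        cr, cc = F.shape[0] // 2, F.shape[1] // 2
        block = F[cr - keep:cr + keep, cc - keep:cc + keep]   # low spatial frequencies
        feats.append(np.concatenate([block.real.ravel(), block.imag.ravel()]))
    return np.array(feats)

# Two principal-component scores per subject, for plotting or clustering:
# scores = PCA(n_components=2).fit_transform(fourier_features(images))
```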

  12. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    Interpretation keys are proposed as an easily implemented interpretation model that could make side-looking airborne radar (SLAR) imagery a more widely used data source in geoscience and agriculture. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associative dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.

  13. Challenges and opportunities for quantifying roots and rhizosphere interactions through imaging and image analysis.

    PubMed

    Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A

    2015-07-01

    The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques; however, trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in the statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. PMID:25211059

  14. Spatially Weighted Principal Component Analysis for Imaging Classification

    PubMed Central

    Guo, Ruixin; Ahn, Mihye; Zhu, Hongtu

    2014-01-01

    The aim of this paper is to develop a supervised dimension reduction framework, called Spatially Weighted Principal Component Analysis (SWPCA), for high dimensional imaging classification. Two main challenges in imaging classification are the high dimensionality of the feature space and the complex spatial structure of imaging data. In SWPCA, we introduce two sets of novel weights including global and local spatial weights, which enable a selective treatment of individual features and incorporation of the spatial structure of imaging data and class label information. We develop an efficient two-stage iterative SWPCA algorithm and its penalized version along with the associated weight determination. We use both simulation studies and real data analysis to evaluate the finite-sample performance of our SWPCA. The results show that SWPCA outperforms several competing principal component analysis (PCA) methods, such as supervised PCA (SPCA), and other competing methods, such as sparse discriminant analysis (SDA). PMID:26089629
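
    A heavily simplified sketch of the core idea, under the assumption that spatial weighting can be approximated by rescaling each voxel before PCA; SWPCA's iterative global/local weight estimation and its coupling to class labels are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

def weighted_pca(X, weights, n_components=10):
    """PCA on spatially re-weighted voxel features.

    X: (n_subjects, n_voxels); weights: (n_voxels,) emphasizing informative,
    spatially coherent regions (assumed to be supplied by some prior step).
    """
    Xw = X * weights                       # broadcast one weight per voxel
    pca = PCA(n_components=n_components)
    return pca.fit_transform(Xw), pca
```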

  15. Electron Microscopy and Image Analysis for Selected Materials

    NASA Technical Reports Server (NTRS)

    Williams, George

    1999-01-01

    This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials, including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observation, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world, including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials, including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  16. Automated Analysis of Mammography Phantom Images

    NASA Astrophysics Data System (ADS)

    Brooks, Kenneth Wesley

    The present work stems from the hypothesis that humans are inconsistent when making subjective analyses of images and that human decisions for moderately complex images may be performed by a computer with complete objectivity, once a human acceptance level has been established. The following goals were established to test the hypothesis: (1) investigate observer variability within the standard mammographic phantom evaluation process; (2) evaluate options for high-resolution image digitization and utilize the most appropriate technology for standard mammographic phantom film digitization; (3) develop a machine-based vision system for evaluating standard mammographic phantom images to eliminate effects of human variabilities; and (4) demonstrate the completed system's performance against human observers for accreditation and for manufacturing quality control of standard mammographic phantom images. The following methods and procedures were followed to achieve the goals of the research: (1) human variabilities in the American College of Radiology accreditation process were simulated by observer studies involving 30 medical physicists and these were compared to the same number of diagnostic radiologists and untrained control group of observers; (2) current digitization technologies were presented and performance test procedures were developed; three devices were tested which represented commercially available high, intermediate and low-end contrast and spatial resolution capabilities; (3) optimal image processing schemes were applied and tested which performed low, intermediate and high-level computer vision tasks; and (4) the completed system's performance was tested against human observers for accreditation and for manufacturing quality control of standard mammographic phantom images. The results from application of the procedures were as follows: (1) the simulated American College of Radiology mammography accreditation program phantom evaluation process demonstrated

  17. Image analysis in dual modality tomography for material classification

    NASA Astrophysics Data System (ADS)

    Basarab-Horwath, I.; Daniels, A. T.; Green, R. G.

    2001-08-01

    A dual-modality tomographic system is described for material classification in a simulated multi-component flow regime. It combines two tomographic modalities, electrical current and light, to image the interrogated area. The directly derived image parameters did not allow material classification. PCA was therefore performed on this data set, producing a new parameter set that did allow material classification. This procedure reduces the dimensionality of the data set and also offers a pre-processing technique prior to analysis by another classifier.

  18. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  19. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.
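
    A compact sketch of scribble-based interactive segmentation with learning, of the kind the review covers: per-pixel features plus a random forest trained on operator-labeled pixels. The three features used here are a minimal assumed set, not those of any particular published tool.

```python
import numpy as np
from skimage.filters import gaussian, sobel
from sklearn.ensemble import RandomForestClassifier

def interactive_segment(image, scribbles):
    """Segment an image from sparse operator scribbles.

    scribbles: integer array, 0 = unlabeled, 1..K = operator-assigned classes.
    """
    feats = np.stack([image,
                      gaussian(image, sigma=2),   # local average intensity
                      sobel(image)],              # edge strength
                     axis=-1).reshape(-1, 3)
    labels = scribbles.ravel()
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats[labels > 0], labels[labels > 0])   # train only on scribbled pixels
    return clf.predict(feats).reshape(image.shape)
```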

  1. Analysis on correlation imaging based on fractal interpolation

    NASA Astrophysics Data System (ADS)

    Li, Bailing; Zhang, Wenwen; Chen, Qian; Gu, Guohua

    2015-10-01

    A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity characteristics of the light field are analyzed in a correlation experiment. For correlation imaging experiments under low sampling frequency, an image analysis approach based on a fractal interpolation algorithm is proposed. This approach aims to improve the resolution of the original image, which contains a small number of pixels, and to highlight fuzzy image contours. Using this method, a new model of the light field has been established. For the case of different moments of the intensity in the receiving plane, a local field division has also been established, and the iterated function system based on the experimental data set can then be obtained by choosing the appropriate compression ratio under a sound error estimate. On the basis of the iterated function system, an explicit fractal interpolation function expression is given in this paper. The simulation results show that the correlation image reconstructed by fractal interpolation is a good approximation to the original image. The number of pixels after interpolation is significantly increased. This method effectively addresses the problem of pixel deficiency and significantly improves the outlines of objects in the image. The deviation rate is adopted as a parameter to evaluate the effect of the algorithm objectively. To sum up, the fractal interpolation method proposed in this paper not only preserves the overall image but also increases the local information of the original image.
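
    A small sketch of a Barnsley-style fractal interpolation function: affine IFS maps are built from the data points and vertical scaling factors, and deterministic iteration refines the graph. The coupling to correlation-imaging data and the choice of compression ratio from an error estimate are the paper's own and are not reproduced.

```python
import numpy as np

def fractal_interpolate(x, y, d, n_iter=6):
    """Fractal interpolation function through data points (x_i, y_i).

    d: vertical scaling factors, one per interval, |d_i| < 1. Starting from
    the data polyline, each iteration maps the whole graph into every
    interval with the affine IFS maps w_i(t, v) = (a_i t + e_i, c_i t + d_i v + f_i).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x) - 1
    a = (x[1:] - x[:-1]) / (x[-1] - x[0])
    e = (x[-1] * x[:-1] - x[0] * x[1:]) / (x[-1] - x[0])
    c = (y[1:] - y[:-1] - d * (y[-1] - y[0])) / (x[-1] - x[0])
    f = (x[-1] * y[:-1] - x[0] * y[1:] - d * (x[-1] * y[0] - x[0] * y[-1])) / (x[-1] - x[0])

    pts = np.column_stack([x, y])
    for _ in range(n_iter):
        pts = np.concatenate([
            np.column_stack([a[i] * pts[:, 0] + e[i],
                             c[i] * pts[:, 0] + d[i] * pts[:, 1] + f[i]])
            for i in range(N)])
        pts = pts[np.argsort(pts[:, 0])]
    return pts

# curve = fractal_interpolate([0, 0.4, 1], [0, 0.8, 0.2], d=np.array([0.3, -0.3]))
```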

  2. Image analysis of dye stained patterns in soils

    NASA Astrophysics Data System (ADS)

    Bogner, Christina; Trancón y Widemann, Baltasar; Lange, Holger

    2013-04-01

    Quality of surface water and groundwater is directly affected by flow processes in the unsaturated zone. In general, it is difficult to measure or model water flow. Indeed, parametrization of hydrological models is problematic and often no unique solution exists. To visualise flow patterns in soils directly, dye tracer studies can be done. These experiments provide images of stained soil profiles, and their evaluation demands knowledge in hydrology as well as in image analysis and statistics. First, these photographs are converted to binary images, classifying the pixels into dye-stained and non-stained ones. Then, some feature extraction is necessary to discern relevant hydrological information. In our study we propose to use several index functions to extract different (ideally complementary) features. We associate each image row with a feature vector (i.e. a certain number of image function values) and use these features to cluster the image rows to identify similar image areas. Because images of stained profiles might have different reasonable clusterings, we calculate multiple consensus clusterings. An expert can explore these different solutions and base his/her interpretation of predominant flow mechanisms on quantitative (objective) criteria. The complete workflow from reading in binary images to final clusterings has been implemented in the free R system, a language and environment for statistical computing. The calculation of image indices is part of our own package Indigo; manipulation of binary images, clustering and visualization of results are done using either built-in facilities in R, additional R packages or the LaTeX system.
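
    A minimal Python sketch of the row-wise feature idea (the authors' implementation is in R, in their package Indigo): each image row is summarized by a few index values, and the rows are then clustered. The three indices used here are illustrative, not the authors' exact index functions.

```python
import numpy as np
from sklearn.cluster import KMeans

def row_features(row):
    """Dye coverage, stained-run count and mean run width for one image row."""
    padded = np.r_[0, row.astype(np.int8), 0]
    edges = np.flatnonzero(np.diff(padded))      # alternating run starts and ends
    widths = edges[1::2] - edges[::2]            # width of each stained run
    return [row.mean(), len(widths), widths.mean() if len(widths) else 0.0]

def cluster_rows(binary, n_clusters=3, seed=0):
    """Cluster the depth rows of a binary stained-profile image."""
    feats = np.array([row_features(r) for r in binary])
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(feats)
```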

  3. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available, from commercial to academic, special-purpose to Swiss army knife, small to large, but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  4. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of these data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data are recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
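
    A generic sketch of the segment-then-merge pattern that region-based OBIA tools build on (not the RHSEG algorithm itself, which adds recursive subdivision and non-adjacent region classes): oversegment, build a region adjacency graph, and merge similar neighbors.

```python
from skimage.segmentation import slic
from skimage import graph   # scikit-image >= 0.20; earlier releases: skimage.future.graph

def segment_then_merge(image, n_segments=400, thresh=0.08):
    """Oversegment an RGB image, then merge spectrally similar neighbors."""
    labels = slic(image, n_segments=n_segments, compactness=10)  # small initial regions
    rag = graph.rag_mean_color(image, labels)                    # region adjacency graph
    return graph.cut_threshold(labels, rag, thresh)              # merge similar neighbors
```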

  5. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We nevertheless used some force fitting to make a comparison, because it shows what is also obvious qualitatively: in our tasks, our method works better.
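
    A minimal sketch of the projection idea, assuming illustrative Gaussian sensitivity curves (the centers and width below are our assumptions, not the curves used in the paper):

        import numpy as np

        def artificial_color(cube, wavelengths, centers=(450.0, 550.0, 600.0), width=60.0):
            """Project a (rows, cols, bands) cube onto a few broad, overlapping curves."""
            curves = np.stack([np.exp(-0.5 * ((wavelengths - c) / width) ** 2)
                               for c in centers])        # (n_curves, bands)
            curves /= curves.sum(axis=1, keepdims=True)  # normalize each curve
            return np.tensordot(cube, curves.T, axes=1)  # (rows, cols, n_curves)

        # Example: reduce a 100-band cube to a 3-channel, cone-like representation.
        cube = np.random.rand(64, 64, 100)
        wavelengths = np.linspace(400.0, 700.0, 100)
        reduced = artificial_color(cube, wavelengths)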

  6. Analysis of Multipath Pixels in SAR Images

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.

    2016-06-01

    As the received radar signal is the sum of the signal contributions overlaid in a single pixel regardless of travel path, the multipath effect must be seriously tackled: multiple-bounce returns are added to the direct scatter echoes, which leads to ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To facilitate the signal propagation simulation, many aspects, such as the sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity) which determine the strength of the radar signal backscattered to the SAR sensor, must be given in advance. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at further comparison. Besides the pre-imaging simulation, the after-imaging products, i.e. the radar images, are also taken into consideration. Both Cosmo-SkyMed ascending and descending SAR images of Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity map and the signal distribution map of different bounce levels are simulated and validated against a 3D real model. Statistical indices such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.
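
    As a small illustration of two of the statistics named above, the following sketch computes per-pixel mean amplitude and amplitude dispersion from a stack of co-registered SAR amplitude images; the stack here is a random placeholder.

        import numpy as np

        # placeholder: 30 co-registered SAR amplitude images of 256 x 256 pixels
        stack = np.abs(np.random.randn(30, 256, 256))

        mean_amp = stack.mean(axis=0)                  # per-pixel mean amplitude
        amp_dispersion = stack.std(axis=0) / mean_amp  # D_A = sigma / mu; low values
                                                       # suggest phase-stable scatterers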

  7. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report the Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were totally embedded in water before ICA was applied.
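
    A minimal sketch of the blind source separation step, assuming scikit-learn's FastICA and treating each pixel's THz spectrum as a mixed observation (array shapes and data are placeholders):

        import numpy as np
        from sklearn.decomposition import FastICA

        X = np.random.rand(1024, 200)    # placeholder: (n_pixels, n_frequencies) spectra

        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(X)   # (n_pixels, 2) independent components
        mixing = ica.mixing_             # (n_frequencies, 2) spectral signatures

        # The component whose signature matches the reference water-vapor spectrum
        # can then be zeroed out before reconstructing the cleaned image.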

  8. Synthetic aperture sonar imaging using joint time-frequency analysis

    NASA Astrophysics Data System (ADS)

    Wang, Genyuan; Xia, Xiang-Gen

    1999-03-01

    The non-ideal motion of the hydrophone usually induces the aperture error of the synthetic aperture sonar (SAS), which is one of the most important factors degrading the SAS imaging quality. In the SAS imaging, the return signals are usually nonstationary due to the non-ideal hydrophone motion. In this paper, joint time-frequency analysis (JTFA), as a good technique for analyzing nonstationary signals, is used in the SAS imaging. Based on the JTFA of the sonar return signals, a novel SAS imaging algorithm is proposed. The algorithm is verified by simulation examples.
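
    As a simple illustration of joint time-frequency analysis on a nonstationary return, the sketch below computes a spectrogram (windowed FFT) of a synthetic linear-FM chirp; the signal parameters are illustrative, not taken from the paper.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 10_000.0                                    # sampling rate, Hz
        t = np.arange(0, 0.5, 1 / fs)
        x = np.cos(2 * np.pi * (500 * t + 2000 * t**2))  # chirp: frequency rises with time

        f, tau, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
        # Sxx[i, j] is the signal energy near frequency f[i] at time tau[j]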

  9. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  10. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quickbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters was analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with its optimal parameters of scale, shape and compactness, defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied to the segmented images, and then accuracy assessment was performed. Results show that segmentation parameters have a direct effect on the classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects; hence it can be set to a constant value in image classification.

  11. Texture analysis on MRI images of non-Hodgkin lymphoma.

    PubMed

    Harrison, L; Dastidar, P; Eskola, H; Järvenpää, R; Pertovaara, H; Luukkaala, T; Kellokumpu-Lehtinen, P-L; Soimakallio, S

    2008-04-01

    The aim here is to show that texture parameters of magnetic resonance imaging (MRI) data change in lymphoma tissue during chemotherapy. Ten patients having non-Hodgkin lymphoma masses in the abdomen were imaged for chemotherapy response evaluation three consecutive times. The analysis was performed with the MaZda texture analysis (TA) application. The best discrimination in lymphoma MRI texture was obtained within T2-weighted images between the pre-treatment and the second response evaluation stage. TA proved to be a promising quantitative means of representing lymphoma tissue changes during medication follow-up. PMID:18342845
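
    Although MaZda computes a much larger parameter set, a minimal sketch of one common texture-parameter family (grey-level co-occurrence features) might look as follows, assuming scikit-image (where the functions are named graycomatrix/graycoprops in recent releases); the region of interest is a placeholder.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder MRI ROI

        glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        contrast = graycoprops(glcm, 'contrast')[0, 0]
        homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]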

  12. The Land Analysis System (LAS) for multispectral image processing

    USGS Publications Warehouse

    Wharton, S. W.; Lu, Y. C.; Quirk, Bruce K.; Oleson, Lyndon R.; Newcomer, J. A.; Irani, Frederick M.

    1988-01-01

    The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting, conversion utilities, and high-level device independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

  13. (Hyper)-graphical models in biomedical image analysis.

    PubMed

    Paragios, Nikos; Ferrante, Enzo; Glocker, Ben; Komodakis, Nikos; Parisot, Sarah; Zacharaki, Evangelia I

    2016-10-01

    Computational vision, visual computing and biomedical image analysis have made tremendous progress over the past two decades. This is mostly due to the development of efficient learning and inference algorithms which allow better and richer modeling of image and visual understanding tasks. Hyper-graph representations are among the most prominent tools to address such perception through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the importance of such representations, discuss their strengths and limitations, provide appropriate strategies for their inference and present their application to a variety of problems in biomedical image analysis. PMID:27377331

  14. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region together with image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  15. Pathology imaging informatics for quantitative analysis of whole-slide images

    PubMed Central

    Kothari, Sonal; Phan, John H; Stokes, Todd H; Wang, May D

    2013-01-01

    Objectives With the objective of bringing clinical decision support systems to reality, this article reviews histopathological whole-slide imaging informatics methods, associated challenges, and future research opportunities. Target audience This review targets pathologists and informaticians who have a limited understanding of the key aspects of whole-slide image (WSI) analysis and/or a limited knowledge of state-of-the-art technologies and analysis methods. Scope First, we discuss the importance of imaging informatics in pathology and highlight the challenges posed by histopathological WSI. Next, we provide a thorough review of current methods for: quality control of histopathological images; feature extraction that captures image properties at the pixel, object, and semantic levels; predictive modeling that utilizes image features for diagnostic or prognostic applications; and data and information visualization that explores WSI for de novo discovery. In addition, we highlight future research directions and discuss the impact of large public repositories of histopathological data, such as the Cancer Genome Atlas, on the field of pathology informatics. Following the review, we present a case study to illustrate a clinical decision support system that begins with quality control and ends with predictive modeling for several cancer endpoints. Currently, state-of-the-art software tools only provide limited image processing capabilities instead of complete data analysis for clinical decision-making. We aim to inspire researchers to conduct more research in pathology imaging informatics so that clinical decision support can become a reality. PMID:23959844

  16. Cloud based toolbox for image analysis, processing and reconstruction tasks.

    PubMed

    Bednarz, Tomasz; Wang, Dadong; Arzhaeva, Yulia; Lagerstrom, Ryan; Vallotton, Pascal; Burdett, Neil; Khassapov, Alex; Szul, Piotr; Chen, Shiping; Sun, Changming; Domanski, Luke; Thompson, Darren; Gureyev, Timur; Taylor, John A

    2015-01-01

    This chapter describes a novel way of carrying out image analysis, reconstruction and processing tasks using a cloud-based service provided on the Australian National eResearch Collaboration Tools and Resources (NeCTAR) infrastructure. The toolbox allows users free access to a wide range of useful blocks of functionality (imaging functions) that can be connected together in workflows, allowing the creation of even more complex algorithms that can be re-run on different data sets, shared with others or additionally adjusted. The functions provided are in the areas of cellular imaging, advanced X-ray image analysis, computed tomography and 3D medical imaging and visualisation. The service is currently available at www.cloudimaging.net.au. PMID:25381109

  17. Segmented infrared image analysis for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Duan, Lixiang; Yao, Mingchao; Wang, Jinjiang; Bai, Tangbo; Zhang, Laibin

    2016-07-01

    As a noncontact and non-intrusive technique, infrared image analysis is promising for machinery defect diagnosis. However, weak informative content and strong noise in infrared images limit its performance. To address this issue, this paper presents an image segmentation approach to enhance feature extraction in infrared image analysis. A region selection criterion named dispersion degree is also formulated to discriminate fault-representative regions from unrelated background information. Feature extraction and fusion methods are then applied to obtain features from the selected regions for further diagnosis. Experimental studies on a rotor fault simulator demonstrate that the presented segmented feature enhancement approach outperforms the one based on the original image using both a Naïve Bayes classifier and a support vector machine.

  18. Image analysis and compression: renewed focus on texture

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Zujovic, Jana; Neuhoff, David L.

    2010-01-01

    We argue that a key to further advances in the fields of image analysis and compression is a better understanding of texture. We review a number of applications that critically depend on texture analysis, including image and video compression, content-based retrieval, visual to tactile image conversion, and multimodal interfaces. We introduce the idea of "structurally lossless" compression of visual data that allows significant differences between the original and decoded images, which may be perceptible when they are viewed side-by-side, but do not affect the overall quality of the image. We then discuss the development of objective texture similarity metrics, which allow substantial point-by-point deviations between textures that according to human judgment are essentially identical.

  19. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucus membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least square regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method’s discriminate rates were higher than those determined by human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
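
    A hedged sketch of the discrimination step only: Fisher linear discriminant analysis on per-image features, assuming scikit-learn; the feature matrix, labels, and group sizes are placeholders, and the least-squares feature extraction itself is not shown.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        X = np.random.rand(80, 5)        # placeholder: 5 features per conjunctival image
        y = np.repeat([0, 1, 2, 3], 20)  # placeholder: non-diabetic plus three DR stages

        lda = LinearDiscriminantAnalysis()
        lda.fit(X, y)
        print(lda.score(X, y))           # in-sample discrimination rate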

  1. Multispectral image analysis for algal biomass quantification.

    PubMed

    Murphy, Thomas E; Macon, Keith; Berberoglu, Halil

    2013-01-01

    This article reports a novel multispectral image processing technique for rapid, noninvasive quantification of biomass concentration in attached and suspended algae cultures. Monitoring the biomass concentration is critical for efficient production of biofuel feedstocks, food supplements, and bioactive chemicals. Particularly, noninvasive and rapid detection techniques can significantly aid in providing delay-free process control feedback in large-scale cultivation platforms. In this technique, three-band spectral images of Anabaena variabilis cultures were acquired and separated into their red, green, and blue components. A correlation between the magnitude of the green component and the areal biomass concentration was generated. The correlation predicted the biomass concentrations of independently prepared attached and suspended cultures with errors of 7 and 15%, respectively. The effects of varying lighting conditions and background color were also investigated. This method can provide the necessary feedback for dilution and harvesting strategies to maximize photosynthetic conversion efficiency in large-scale operation. PMID:23554374
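
    A minimal sketch of the calibration idea, with placeholder numbers: fit a line between mean green-channel intensity and known areal biomass, then invert the fit for new culture images.

        import numpy as np

        green_means = np.array([40.0, 55.0, 72.0, 90.0, 110.0])  # calibration images
        biomass = np.array([2.0, 4.1, 6.2, 8.0, 10.3])           # measured, g/m^2

        slope, intercept = np.polyfit(green_means, biomass, 1)   # linear correlation

        def predict_biomass(image_rgb):
            """image_rgb: (rows, cols, 3) array; returns estimated areal biomass."""
            g = image_rgb[..., 1].mean()   # mean green-component magnitude
            return slope * g + intercept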

  2. Computerized microscopic image analysis of follicular lymphoma

    NASA Astrophysics Data System (ADS)

    Sertel, Olcay; Kong, Jun; Lozanski, Gerard; Catalyurek, Umit; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Follicular Lymphoma (FL) is a cancer arising from the lymphatic system. Originating from follicle center B cells, FL is mainly comprised of centrocytes (usually middle-to-small sized cells) and centroblasts (relatively large malignant cells). According to the World Health Organization's recommendations, there are three histological grades of FL characterized by the number of centroblasts per high-power field (hpf) of area 0.159 mm2. In current practice, these cells are manually counted from ten representative fields of follicles after visual examination of hematoxylin and eosin (H&E) stained slides by pathologists. Several studies clearly demonstrate the poor reproducibility of this grading system, with very low inter-reader agreement. In this study, we are developing a computerized system to assist pathologists with this process. A hybrid approach that combines information from several slides with different stains has been developed. Thus, follicles are first detected from digitized microscopy images with immunohistochemistry (IHC) stains (i.e., CD10 and CD20). The average sensitivity and specificity of the follicle detection, tested on 30 images at 2×, 4× and 8× magnifications, are 85.5+/-9.8% and 92.5+/-4.0%, respectively. Since the centroblast detection is carried out in the H&E-stained slides, the follicles in the IHC-stained images are mapped to their H&E-stained counterparts. To evaluate the centroblast differentiation capabilities of the system, 11 hpf images were marked by an experienced pathologist, who identified 41 centroblast cells and 53 non-centroblast cells. A non-supervised clustering process differentiates the centroblast cells from non-centroblast cells, resulting in 92.68% sensitivity and 90.57% specificity.

  3. Measurement and analysis of image sensors

    NASA Astrophysics Data System (ADS)

    Vitek, Stanislav

    2005-06-01

    Astronomical applications require high precision in sensing and processing image data. At present, large CCD sensors are used for various reasons. Before replacing CCD sensors with CMOS sensing devices, it is important to know the transfer characteristics of the CCD sensors in use. In special applications such as robotic telescopes (fully automatic, without human interaction), specially designed smart sensors, which integrate more functions and offer more features than CCDs, appear to be a good choice.

  4. Analysis of pregerminated barley using hyperspectral image analysis.

    PubMed

    Arngren, Morten; Hansen, Per Waaben; Eriksen, Birger; Larsen, Jan; Larsen, Rasmus

    2011-11-01

    Pregermination is one of many serious degradations to barley used for malting. A pregerminated barley kernel may, under certain conditions, fail to regerminate and is then reduced to lower-quality animal feed. Identifying pregermination at an early stage is therefore essential in order to segregate the barley kernels into low or high quality. Current standard methods to quantify pregerminated barley include visual approaches, e.g. identifying the root sprout, or an embryo staining method, which uses a time-consuming procedure. We present an approach using a near-infrared (NIR) hyperspectral imaging system in a mathematical modeling framework to identify pregerminated barley at an early stage of approximately 12 h of pregermination. Our model only assigns pregermination as the cause of a single kernel's lack of germination and is unable to identify dormancy, kernel damage, etc. The analysis is based on more than 750 Rosalina barley kernels pregerminated for 8 different durations between 0 and 60 h based on the BRF method. Regerminating the kernels reveals a grouping of the pregerminated kernels into three categories: normal, delayed and limited germination. Our model employs a supervised classification framework based on a set of extracted features insensitive to kernel orientation. An out-of-sample classification error of 32% (CI(95%): 29-35%) is obtained for single kernels grouped into the three categories, and an error of 3% (CI(95%): 0-15%) is achieved at the bulk kernel level. The model provides class probabilities for each kernel, which can assist in achieving homogeneous germination profiles. This research can be developed further to establish an automated and faster procedure as an alternative to the standard procedures for pregerminated barley. PMID:21932866

  5. Performance analysis for geometrical attack on digital image watermarking

    NASA Astrophysics Data System (ADS)

    Jayanthi, VE.; Rajamani, V.; Karthikayen, P.

    2011-11-01

    We present an irreversible watermarking technique, robust to affine-transform attacks, for camera, biomedical and satellite images stored as monochrome bitmap images. The watermarking approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine-transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code division multiple access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transform, signal processing and JPEG compression. Performance analysis results are measured using image processing metrics.

  6. Multispectral/hyperspectral image enhancement for biological cell analysis

    SciTech Connect

    Nuffer, Lisa L.; Medvick, Patricia A.; Foote, Harlan P.; Solinsky, James C.

    2006-08-01

    The paper shows new techniques for analyzing cell images taken with a microscope using multiple filters to form a datacube of spectral image planes. Because of the many neighboring spectral samples, much of the datacube appears as redundant, similar tissue. The analysis is based on the non-Gaussian statistics of the image data, allowing for remapping of the data into image components that are dissimilar, and hence isolating subtle spatial object regions of interest in the tissues. This individual component image set can be recombined into a single RGB color image useful in real-time location of regions of interest. The algorithms are susceptible to parallelization using Field Programmable Gate Array hardware processing.

  7. Method for measuring anterior chamber volume by image analysis

    NASA Astrophysics Data System (ADS)

    Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli

    2007-12-01

    Anterior chamber volume (ACV) is very important for an oculist to make a rational pathological diagnosis for patients with ocular diseases such as glaucoma, yet it is difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been transformed from medical images using the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. It shows that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.

  8. Seismoelectric beamforming imaging: a sensitivity analysis

    NASA Astrophysics Data System (ADS)

    El Khoury, P.; Revil, A.; Sava, P.

    2015-06-01

    The electrical current density generated by the propagation of a seismic wave at the interface characterized by a drop in electrical, hydraulic or mechanical properties produces an electrical field of electrokinetic nature. This field can be measured remotely with a signal-to-noise ratio depending on the background noise and signal attenuation. The seismoelectric beamforming approach is an emerging imaging technique based on scanning a porous material using appropriately delayed seismic sources. The idea is to focus the hydromechanical energy on a regular spatial grid and measure the converted electric field remotely at each focus time. This method can be used to image heterogeneities with a high definition and to provide structural information to classical geophysical methods. A numerical experiment is performed to investigate the resolution of the seismoelectric beamforming approach with respect to the main wavelength of the seismic waves. The 2-D model consists of a fictitious water-filled bucket in which a cylindrical sandstone core sample is set up vertically. The hydrophones/seismic sources are located on a 50-cm diameter circle in the bucket and the seismic energy is focused on the grid points in order to scan the medium and determine the geometry of the porous plug using the output electric potential image. We observe that the resolution of the method is given by a density of eight scanning points per wavelength. Additional numerical tests were also performed to see the impact of a wrong velocity model upon the seismoelectric map displaying the heterogeneities of the material.

  9. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB-based image analysis tool kit to analyze CT-generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method that generates a modified set of PCA components with sparse loadings, as compared to standard principal component analysis (PCA), in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.

  10. A guide to human in vivo microcirculatory flow image analysis.

    PubMed

    Massey, Michael J; Shapiro, Nathan I

    2016-01-01

    Various noninvasive microscopic camera technologies have been used to visualize the sublingual microcirculation in patients. We describe a comprehensive approach to bedside in vivo sublingual microcirculation video image capture and analysis techniques in the human clinical setting. We present a user perspective and guide suitable for clinical researchers and developers interested in the capture and analysis of sublingual microcirculatory flow videos. We review basic differences in the cameras, optics, light sources, operation, and digital image capture. We describe common techniques for image acquisition and discuss aspects of video data management, including data transfer, metadata, and database design and utilization to facilitate the image analysis pipeline. We outline image analysis techniques and reporting including video preprocessing and image quality evaluation. Finally, we propose a framework for future directions in the field of microcirculatory flow videomicroscopy acquisition and analysis. Although automated scoring systems have not been sufficiently robust for widespread clinical or research use to date, we discuss promising innovations that are driving new development. PMID:26861691

  11. New approach to gallbladder ultrasonic images analysis and lesions recognition.

    PubMed

    Bodzioch, Sławomir; Ogiela, Marek R

    2009-03-01

    This paper presents a new approach to gallbladder ultrasonic image processing and analysis towards the detection of disease symptoms on processed images. First, the paper presents a new method of filtering gallbladder contours from USG images. A major stage in this filtration is to segment and section off the areas occupied by the organ. In most cases this procedure is based on filtration that plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyze owing to the echogenic inconsistency of the structures under observation. This paper provides an inventive algorithm for the holistic extraction of gallbladder image contours. The algorithm is based on rank filtration, as well as on the analysis of histogram sections of the tested organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms used to analyze the object of such diagnosis and verify the occurrence of symptoms related to the given affection. Usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out through either dedicated expert systems or a more classical pattern-analysis approach, such as using rules to determine the illness based on the detected symptoms. This paper discusses the pattern analysis algorithms for gallbladder image interpretation towards classification of the most frequent illness symptoms of this organ. PMID:19124224

  12. Towards a Quantitative OCT Image Analysis

    PubMed Central

    Garcia Garrido, Marina; Beck, Susanne C.; Mühlfriedel, Regine; Julien, Sylvie; Schraermeyer, Ulrich; Seeliger, Mathias W.

    2014-01-01

    Background Optical coherence tomography (OCT) is an invaluable diagnostic tool for the detection and follow-up of retinal pathology in patients and experimental disease models. However, as morphological structures and layering in health as well as their alterations in disease are complex, segmentation procedures have not yet reached a satisfactory level of performance. Therefore, raw images and qualitative data are commonly used in clinical and scientific reports. Here, we assess the value of OCT reflectivity profiles as a basis for a quantitative characterization of the retinal status in a cross-species comparative study. Methods Spectral-Domain Optical Coherence Tomography (OCT), confocal Scanning-Laser Ophthalmoscopy (SLO), and Fluorescein Angiography (FA) were performed in mice (Mus musculus), gerbils (Gerbillus perpallidus), and cynomolgus monkeys (Macaca fascicularis) using the Heidelberg Engineering Spectralis system, and additional SLOs and FAs were obtained with the HRA I (same manufacturer). Reflectivity profiles were extracted from 8-bit greyscale OCT images using the ImageJ software package (http://rsb.info.nih.gov/ij/). Results Reflectivity profiles obtained from OCT scans of all three animal species correlated well with ex vivo histomorphometric data. Each of the retinal layers showed a typical pattern that varied in relative size and degree of reflectivity across species. In general, plexiform layers showed a higher level of reflectivity than nuclear layers. A comparison of reflectivity profiles from specialized retinal regions (e.g. visual streak in gerbils, fovea in non-human primates) with respective regions of human retina revealed multiple similarities. In a model of Retinitis Pigmentosa (RP), the value of reflectivity profiles for the follow-up of therapeutic interventions was demonstrated. Conclusions OCT reflectivity profiles provide a detailed, quantitative description of retinal layers and structures including specialized retinal regions
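
    A minimal sketch of the profile extraction described in Methods, re-expressed with NumPy instead of ImageJ: average an 8-bit grayscale B-scan across A-scans at each depth (the input array is a placeholder).

        import numpy as np

        bscan = (np.random.rand(496, 768) * 255).astype(np.uint8)  # rows = depth

        profile = bscan.mean(axis=1)  # mean reflectivity at each retinal depth
        # Peaks correspond to highly reflective (e.g. plexiform) layers,
        # troughs to nuclear layers.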

  13. Integrated wavelets for medical image analysis

    NASA Astrophysics Data System (ADS)

    Heinlein, Peter; Schneider, Wilfried

    2003-11-01

    Integrated wavelets are a new method for discretizing the continuous wavelet transform (CWT). Independent of the choice of discrete scale and orientation parameters they yield tight families of convolution operators. Thus these families can easily be adapted to specific problems. After presenting the fundamental ideas, we focus primarily on the construction of directional integrated wavelets and their application to medical images. We state an exact algorithm for implementing this transform and present applications from the field of digital mammography. The first application covers the enhancement of microcalcifications in digital mammograms. Further, we exploit the directional information provided by integrated wavelets for better separation of microcalcifications from similar structures.

  14. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Seifert, Allen; Miller, Erin A.; Myjak, Mitchell J.; Robinson, Sean M.; Jarman, Kenneth D.; Misner, Alex C.; Pitts, W. Karl; Woodring, Mitchell L.

    2010-09-01

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute. However, this process must be performed with care. Computing the perimeter, area, and intensity of an object, for example, might reveal sensitive information relating to shape, size, and material composition. This paper presents three analysis algorithms that reduce full image information to non-sensitive feature information. Ultimately, the algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We evaluate the algorithms on both their technical performance in image analysis, and their application with and without an explicitly constructed information barrier. The underlying images can be highly detailed, since they are dynamically generated behind the information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography.

  15. Image analysis tools and emerging algorithms for expression proteomics

    PubMed Central

    English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.

    2012-01-01

    Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614

  16. A TSVD Analysis of Microwave Inverse Scattering for Breast Imaging

    PubMed Central

    Shea, Jacob D.; Van Veen, Barry D.; Hagness, Susan C.

    2013-01-01

    A variety of methods have been applied to the inverse scattering problem for breast imaging at microwave frequencies. While many techniques have been leveraged toward a microwave imaging solution, they are all fundamentally dependent on the quality of the scattering data. Evaluating and optimizing the information contained in the data are, therefore, instrumental in understanding and achieving optimal performance from any particular imaging method. In this paper, a method of analysis is employed for the evaluation of the information contained in simulated scattering data from a known dielectric profile. The method estimates optimal imaging performance by mapping the data through the inverse of the scattering system. The inverse is computed by truncated singular-value decomposition of a system of scattering equations. The equations are made linear by use of the exact total fields in the imaging volume, which are available in the computational domain. The analysis is applied to anatomically realistic numerical breast phantoms. The utility of the method is demonstrated for a given imaging system through the analysis of various considerations in system design and problem formulation. The method offers an avenue for decoupling the problem of data selection from the problem of image formation from that data. PMID:22113770
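
    A minimal sketch of the truncated singular-value decomposition inverse of a linearized system A x = b: keep only the k largest singular values, discarding poorly conditioned directions (matrix sizes and the truncation level are illustrative).

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 150))  # linearized scattering operator
        b = rng.standard_normal(200)         # measured scattering data

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 40                               # truncation level (regularization choice)
        x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])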

  17. Ballistics projectile image analysis for firearm identification.

    PubMed

    Li, Dongguang

    2006-10-01

    This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it will be possible to identify not only the type and model of a firearm, but also each and every individual weapon just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. This paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments discussed in this paper are performed on images acquired from 16 various weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely through digitizing and analyzing the fired projectiles specimens. PMID:17022254
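
    One simple analogue of the FFT-based matching described above is cross-correlation of two line-scan surface profiles computed in the frequency domain; the profiles below are synthetic placeholders, not ballistic data.

        import numpy as np

        rng = np.random.default_rng(1)
        profile_a = rng.standard_normal(2048)                 # line-scan of specimen A
        profile_b = np.roll(profile_a, 37) + 0.1 * rng.standard_normal(2048)

        # circular cross-correlation via the FFT
        corr = np.fft.ifft(np.fft.fft(profile_a) * np.conj(np.fft.fft(profile_b))).real
        lag = int(np.argmax(corr))                            # circular lag of best alignment
        score = corr.max() / (np.linalg.norm(profile_a) * np.linalg.norm(profile_b))
        # a score near 1 at the best lag suggests profiles from the same barrel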

  18. Analysis operator learning and its application to image reconstruction.

    PubMed

    Hawe, Simon; Kleinsteuber, Martin; Diepold, Klaus

    2013-06-01

    Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on l(p)-norm minimization on the set of full rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds, and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques. PMID:23412611
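
    In symbols, with y = Ax + n the measurement model, D a synthesis dictionary, and Ω the analysis operator (notation ours, for illustration), the two priors read:

        % synthesis model: the image is a sparse combination of dictionary atoms
        \hat{\alpha} = \arg\min_{\alpha} \|y - AD\alpha\|_2^2 + \lambda \|\alpha\|_1,
        \qquad \hat{x} = D\hat{\alpha}

        % analysis model: the operator applied to the image is sparse
        \hat{x} = \arg\min_{x} \|y - Ax\|_2^2 + \lambda \|\Omega x\|_p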

  19. The Spectral Image Processing System (SIPS) - Interactive visualization and analysis of imaging spectrometer data

    NASA Technical Reports Server (NTRS)

    Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.

    1993-01-01

    The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).

  1. Subcellular chemical and morphological analysis by stimulated Raman scattering microscopy and image analysis techniques.

    PubMed

    D'Arco, Annalisa; Brancati, Nadia; Ferrara, Maria Antonietta; Indolfi, Maurizio; Frucci, Maria; Sirleto, Luigi

    2016-05-01

    The visualization of heterogeneous morphology, segmentation and quantification of image features is a crucial point for nonlinear optics microscopy applications, spanning from imaging of living cells or tissues to biomedical diagnostic. In this paper, a methodology combining stimulated Raman scattering microscopy and image analysis technique is presented. The basic idea is to join the potential of vibrational contrast of stimulated Raman scattering and the strength of imaging analysis technique in order to delineate subcellular morphology with chemical specificity. Validation tests on label free imaging of polystyrene-beads and of adipocyte cells are reported and discussed. PMID:27231626

  2. A parallel solution for high resolution histological image analysis.

    PubMed

    Bueno, G; González, R; Déniz, O; García-Rojo, M; González-García, J; Fernández-Carrobles, M M; Vállez, N; Salido, J

    2012-10-01

    This paper describes a general methodology for developing parallel image processing algorithms based on message passing for high resolution images (on the order of several gigabytes). These algorithms have been applied to histological images and must be executed on massively parallel processing architectures. Advances in new technologies for complete slide digitalization in pathology have been combined with developments in biomedical informatics. However, the efficient use of these digital slide systems is still a challenge. The image processing that these slides are subject to is still limited both in terms of data processed and processing methods. The work presented here focuses on the need to design and develop parallel image processing tools capable of obtaining and analyzing the entire gamut of information included in digital slides. Tools have been developed to assist pathologists in image analysis and diagnosis, and they cover low- and high-level image processing methods applied to histological images. Code portability, reusability and scalability have been tested by using the following parallel computing architectures: distributed memory with massive parallel processors and two networks, INFINIBAND and Myrinet, composed of 17 and 1024 nodes respectively. The proposed parallel framework is a flexible, high-performance solution, and it shows that the efficient processing of digital microscopic images is possible and may offer important benefits to pathology laboratories. PMID:22522064
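
    A minimal message-passing sketch in the spirit described above, assuming mpi4py: the root rank splits an image into horizontal strips, every rank processes its strip, and the results are gathered back. The per-pixel operation is a placeholder, and the real pipeline additionally handles gigapixel slides and strip borders.

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            image = np.random.rand(4096, 4096)       # placeholder histological image
            strips = np.array_split(image, size, axis=0)
        else:
            strips = None

        strip = comm.scatter(strips, root=0)         # one strip per process
        processed = np.clip(strip * 1.2, 0.0, 1.0)   # placeholder per-pixel operation
        result = comm.gather(processed, root=0)

        if rank == 0:
            full = np.vstack(result)                 # reassembled processed image

    Run, for example, with mpirun -n 4 python strips.py.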

  3. SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Min-Oh; Kim, Dong-Hyun

    2012-03-01

    In conventional 3D Fourier transform (3DFT) MR imaging, the signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and the square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to multiple 2D projection images in x-ray. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in the tomosynthesis process, and an iterative back-projection (IBP) reconstruction method is used to reconstruct the 3D images. SNR analysis shows that the resolution-SNR tradeoff is not governed as in the typical 3DFT MR imaging case. The proposed method provides a higher SNR than the conventional 3D imaging method, with a partial loss of slice-direction resolution. It is expected that this method can be useful for extremely low SNR cases.
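
    For reference, the conventional relationship cited above can be written as (standard 3DFT result; symbols ours):

        \mathrm{SNR}_{\mathrm{3DFT}} \;\propto\; \Delta x \,\Delta y \,\Delta z \,\sqrt{T_{\mathrm{acq}}}

    where the product of the voxel dimensions is the voxel volume and T_acq is the total acquisition time; the record above analyzes how MRT departs from this rule.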

  4. Vibration analysis using digital image processing for in vitro imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhonghua; Wang, Shaohong; Gonzalez, Carlos

    2011-09-01

    A non-invasive self-measurement method for analyzing vibrations within a biological imaging system is presented. This method utilizes the system's imaging sensor, digital image processing and a custom dot-matrix calibration target for in-situ vibration measurements. By taking a series of images of the target within a fixed field of view and time interval and averaging the dot profiles in each image, the in-plane coherent spacing of each dot can be identified in both the horizontal and vertical directions. The incoherent movement in the pattern spacing caused by vibration is then resolved from each image. Accounting for the CMOS imager rolling shutter, vibrations are then measured with different sampling times intra-frame and inter-frame: the former provides the frame time and the latter the image sampling time. Power spectrum density (PSD) analysis is then performed using both measurements to provide the incoherent system displacements and identify potential vibration sources. The PSD plots provide descriptive statistics of the displacement distribution due to random vibration content. This approach has been successful in identifying vibration sources and measuring vibration geometric moments in imaging systems.
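
    A minimal sketch of the PSD step, assuming SciPy's Welch estimator on a per-frame displacement series extracted from the dot-target images (the series below is synthetic):

        import numpy as np
        from scipy.signal import welch

        frame_rate = 30.0                                 # frames per second
        t = np.arange(0, 20, 1 / frame_rate)
        disp = 0.05 * np.sin(2 * np.pi * 3.2 * t) + 0.01 * np.random.randn(t.size)

        f, Pxx = welch(disp, fs=frame_rate, nperseg=256)
        peak_freq = f[np.argmax(Pxx)]                     # candidate vibration source, Hz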

  5. Towards large-scale histopathological image analysis: hashing-based image retrieval.

    PubMed

    Zhang, Xiaofan; Liu, Wei; Dundar, Murat; Badve, Sunil; Zhang, Shaoting

    2015-02-01

    Automatic analysis of histopathological images has been widely utilized leveraging computational image-processing methods and modern machine learning techniques. Both computer-aided diagnosis (CAD) and content-based image-retrieval (CBIR) systems have been successfully developed for diagnosis, disease detection, and decision support in this area. Recently, with the ever-increasing amount of annotated medical data, large-scale and data-driven methods have emerged to offer a promise of bridging the semantic gap between images and diagnostic information. In this paper, we focus on developing scalable image-retrieval techniques to cope intelligently with massive histopathological images. Specifically, we present a supervised kernel hashing technique which leverages a small amount of supervised information in learning to compress a 10 000-dimensional image feature vector into only tens of binary bits with the informative signatures preserved. These binary codes are then indexed into a hash table that enables real-time retrieval of images in a large database. Critically, the supervised information is employed to bridge the semantic gap between low-level image features and high-level diagnostic information. We build a scalable image-retrieval framework based on the supervised hashing technique and validate its performance on several thousand histopathological images acquired from breast microscopic tissues. Extensive evaluations are carried out in terms of image classification (i.e., benign versus actionable categorization) and retrieval tests. Our framework achieves about 88.1% classification accuracy as well as promising time efficiency. For example, the framework can execute around 800 queries in only 0.01 s, comparing favorably with other commonly used dimensionality reduction and feature selection methods. PMID:25314696
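
    The compress-then-index pattern can be sketched with plain random projections (unsupervised locality-sensitive hashing), used here only as a simplified stand-in for the paper's supervised kernel hashing; all sizes and data are placeholders.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        features = rng.standard_normal((1000, 10_000))  # placeholder image features
        W = rng.standard_normal((10_000, 32))           # 32 random hyperplanes

        codes = features @ W > 0                        # (1000, 32) binary codes
        table = defaultdict(list)
        for idx, code in enumerate(codes):
            table[code.tobytes()].append(idx)           # hash-table index

        query = features[123]
        bucket = table[(query @ W > 0).tobytes()]       # constant-time candidate lookup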

  6. Rapid enumeration of viable bacteria by image analysis

    NASA Technical Reports Server (NTRS)

    Singh, A.; Pyle, B. H.; McFeters, G. A.

    1989-01-01

    A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
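
    A minimal sketch of the counting step, assuming SciPy: threshold an epifluorescence image, label connected components, and count only objects above a size cutoff (the "enlarged after incubation" criterion); the image and both thresholds are placeholders.

        import numpy as np
        from scipy import ndimage

        img = np.random.rand(512, 512)      # placeholder fluorescence image
        binary = img > 0.995                # intensity threshold

        labels, n_objects = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n_objects + 1))
        min_area = 4                        # pixels; cutoff for "enlarged" cells
        viable_count = int((sizes >= min_area).sum())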

  7. Automated Analysis of Dynamic Ca2+ Signals in Image Sequences

    PubMed Central

    Francis, Michael; Waldrup, Josh; Qian, Xun; Taylor, Mark S.

    2014-01-01

    Intracellular Ca2+ signals are commonly studied with fluorescent Ca2+ indicator dyes and microscopy techniques. However, quantitative analysis of Ca2+ imaging data is time consuming and subject to bias. Automated signal analysis algorithms based on region of interest (ROI) detection have been implemented for one-dimensional line scan measurements, but there is no current algorithm which integrates optimized identification and analysis of ROIs in two-dimensional image sequences. Here an algorithm for rapid acquisition and analysis of ROIs in image sequences is described. It utilizes ellipses fit to noise filtered signals in order to determine optimal ROI placement, and computes Ca2+ signal parameters of amplitude, duration and spatial spread. This algorithm was implemented as a freely available plugin for ImageJ (NIH) software. Together with analysis scripts written for the open source statistical processing software R, this approach provides a high-capacity pipeline for performing quick statistical analysis of experimental output. The authors suggest that use of this analysis protocol will lead to a more complete and unbiased characterization of physiologic Ca2+ signaling. PMID:24962784

  8. Police witness identification images: a geometric morphometric analysis.

    PubMed

    Hayes, Susan; Tullberg, Cameron

    2012-11-01

    Research into witness identification images typically occurs within the laboratory and involves subjective likeness and recognizability judgments. This study analyzed whether actual witness identification images systematically alter the facial shapes of the suspects described. The shape analysis tool, geometric morphometrics, was applied to 46 homologous facial landmarks displayed on 50 witness identification images and their corresponding arrest photographs, using principal component analysis and multivariate regressions. The results indicate that compared with arrest photographs, witness identification images systematically depict suspects with lowered and medially located eyebrows (p < 0.000001). This was found to occur independently of the Police Artist, and did not occur with composites produced under laboratory conditions. There are several possible explanations for this finding, including any, or all, of the following: the suspect was frowning at the time of the incident; the witness had negative feelings toward the suspect; this is an effect of unfamiliar face processing; or the suspect displayed fear at the time of their arrest photograph. PMID:22536846
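
    A hedged sketch of the core computation: principal component analysis over flattened landmark configurations (NumPy). A real geometric-morphometric pipeline would first remove position, scale, and rotation with a generalized Procrustes fit, which is omitted here; the data are random placeholders.

        import numpy as np

        rng = np.random.default_rng(1)
        n_images, n_landmarks = 50, 46                # mirrors the study design
        shapes = rng.standard_normal((n_images, n_landmarks, 2))

        X = shapes.reshape(n_images, -1)
        X = X - X.mean(axis=0)                        # center each coordinate
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        scores = U * s                                # PC scores per image
        explained = s**2 / np.sum(s**2)
        print(np.round(explained[:3], 3))             # leading modes of shape variation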

  9. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  10. A hyperspectral image analysis workbench for environmental science applications

    SciTech Connect

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-10-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  11. A hyperspectral image analysis workbench for environmental science applications

    SciTech Connect

    Christiansen, J.H.; Zawada, D.G.; Simunich, K.L.; Slater, J.C.

    1992-01-01

    A significant challenge to the information sciences is to provide more powerful and accessible means to exploit the enormous wealth of data available from high-resolution imaging spectrometry, or "hyperspectral" imagery, for analysis, for mapping purposes, and for input to environmental modeling applications. As an initial response to this challenge, Argonne's Advanced Computer Applications Center has developed a workstation-based prototype software workbench which employs AI techniques and other advanced approaches to deduce surface characteristics and extract features from the hyperspectral images. Among its current capabilities, the prototype system can classify pixels by abstract surface type. The classification process employs neural network analysis of inputs which include pixel spectra and a variety of processed image metrics, including image "texture spectra" derived from fractal signatures computed for subimage tiles at each wavelength.

  12. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  13. Classification of Korla fragrant pears using NIR hyperspectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Rao, Xiuqin; Yang, Chun-Chieh; Ying, Yibin; Kim, Moon S.; Chao, Kuanglin

    2012-05-01

    Korla fragrant pears are small oval pears characterized by light green skin, crisp texture, and a pleasant perfume for which they are named. Anatomically, the calyx of a fragrant pear may be either persistent or deciduous; the deciduous-calyx fruits are considered more desirable due to taste and texture attributes. Chinese packaging standards require that packed cases of fragrant pears contain 5% or less of the persistent-calyx type. Near-infrared hyperspectral imaging was investigated as a potential means for automated sorting of pears according to calyx type. Hyperspectral images spanning the 992-1681 nm region were acquired using an EMCCD-based laboratory line-scan imaging system. Analysis of the hyperspectral images was performed to select wavebands useful for identifying persistent-calyx fruits and for identifying deciduous-calyx fruits. Based on the selected wavebands, an image-processing algorithm was developed that targets automated classification of Korla fragrant pears into the two categories for packaging purposes.

  14. Image analysis of ocular fundus for retinopathy characterization

    SciTech Connect

    Ushizima, Daniela; Cuadros, Jorge

    2010-02-05

    Automated analysis of ocular fundus images is a common procedure in countries such as England, including both nonemergency examination and retinal screening of patients with diabetes mellitus. This involves digital image capture and transmission of the images to a digital reading center for evaluation and treatment referral. In collaboration with the Optometry Department, University of California, Berkeley, we have tested computer vision algorithms to segment vessels and lesions in ground-truth data (the DRIVE database) and in hundreds of non-macula-centered, nonuniformly illuminated views of the eye fundus from the EyePACS program. Methods under investigation involve mathematical morphology (Figure 1) for image enhancement and pattern matching. Recently, we have focused on more efficient techniques to model the ocular fundus vasculature (Figure 2) using deformable contours. Preliminary results show accurate segmentation of vessels and a high rate of true-positive microaneurysm detection.

  15. Peripheral blood smear image analysis: A comprehensive review.

    PubMed

    Mohammed, Emad A; Mohamed, Mostafa M A; Far, Behrouz H; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class, and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without giving an explicit class to that object. ANN, SVM, decision trees and K-nearest neighbors are possible choices of classification algorithm. Increased discrimination may be obtained by combining several classifiers together. PMID:24843821
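
    The supervised/unsupervised distinction drawn above can be made concrete with a short scikit-learn sketch on synthetic feature vectors (all data and parameters are placeholders, not from the review):

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        features = np.vstack([rng.normal(0, 1, (40, 5)),     # one cell class
                              rng.normal(3, 1, (40, 5))])    # another cell class
        labels = np.array([0] * 40 + [1] * 40)

        # Supervised: labeled training data predict the class of a test object.
        clf = SVC(kernel="rbf").fit(features, labels)
        print(clf.predict(features[:1]))                     # -> [0]

        # Unsupervised: unlabeled data are grouped by similarity alone.
        groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
        print(groups[:5])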

  16. Peripheral blood smear image analysis: A comprehensive review

    PubMed Central

    Mohammed, Emad A.; Mohamed, Mostafa M. A.; Far, Behrouz H.; Naugler, Christopher

    2014-01-01

    Peripheral blood smear image examination is a part of the routine work of every laboratory. The manual examination of these images is tedious, time-consuming and suffers from interobserver variation. This has motivated researchers to develop different algorithms and methods to automate peripheral blood smear image analysis. Image analysis itself consists of a sequence of steps: image segmentation, feature extraction and selection, and pattern classification. The image segmentation step addresses the problem of extracting the object or region of interest from the complicated peripheral blood smear image. Support vector machines (SVM) and artificial neural networks (ANNs) are two common approaches to image segmentation. Feature extraction and selection aims to derive descriptive characteristics of the extracted object, which are similar within the same object class and different between different objects. This facilitates the last step of the image analysis process: pattern classification. The goal of pattern classification is to assign a class to the selected features from a group of known classes. There are two types of classifier learning algorithms: supervised and unsupervised. Supervised learning algorithms predict the class of the object under test using training data of known classes. The training data have a predefined label for every class, and the learning algorithm can utilize this data to predict the class of a test object. Unsupervised learning algorithms use unlabeled training data and divide them into groups using similarity measurements. Unsupervised learning algorithms predict the group to which a new test object belongs, based on the training data, without giving an explicit class to that object. ANN, SVM, decision trees and K-nearest neighbors are possible choices of classification algorithm. Increased discrimination may be obtained by combining several classifiers together. PMID:24843821

  17. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    The paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. his non-intrusive method analyzes a planar cross section of a flowing system from an in...

  18. MIXING QUANTIFICATION BY VISUAL IMAGING ANALYSIS

    EPA Science Inventory

    This paper reports on development of a method for quantifying two measures of mixing, the scale and intensity of segregation, through flow visualization, video recording, and software analysis. This non-intrusive method analyzes a planar cross section of a flowing system from an ...

  19. Computer vision algorithms in DNA ploidy image analysis

    NASA Astrophysics Data System (ADS)

    Alexandratou, Eleni; Sofou, Anastasia; Papasaika, Haris; Maragos, Petros; Yova, Dido; Kavantzas, Nikolaos

    2006-02-01

    The high incidence and mortality rates of prostate cancer have stimulated research for prevention, early diagnosis and appropriate treatment. DNA ploidy status of tumour cells is an important parameter with diagnostic and prognostic significance. In the current study, DNA ploidy analysis was performed using image cytometry and digital image processing and analysis. Tissue samples from prostate patients were stained using the Feulgen method. Images were acquired using a digital imaging microscopy system consisting of an Olympus BX-50 microscope equipped with a color CCD camera. Segmentation of such images is not a trivial problem because of the uneven background, intensity variations within the nuclei and cell clustering. In this study, specific algorithms were developed in Matlab based on the most prominent image segmentation approaches from the field of mathematical morphology, focusing on region-based watershed segmentation. First, the biomedical images were simplified by non-linear filtering (alternating sequential filters, levelings); next, image features such as gradient information and markers were extracted to guide the segmentation process. The extracted markers were used as seeds, and the watershed transformation was applied to the gradient of the filtered image. Image flooding was performed isotropically from the markers using hierarchical queues, following the Beucher and Meyer methodology. The developed algorithms successfully segmented the cells from the background as well as from cell clusters. To characterize the nuclei, we attempted to derive a set of effective color features. By analyzing more than 50 color features, we found that a set comprising hue, saturation-weighted hue, I1 = (R+G+B)/3, I2 = R-B, I3 = (2G-R-B)/2, the Karhunen-Loeve transformation and the energy operator is effective.
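
    The marker-controlled watershed described above can be sketched with scikit-image in a few lines; this is a simplified stand-in for the authors' Matlab implementation, with illustrative intensity thresholds standing in for the extracted markers:

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_nuclei(gray, low=0.2, high=0.6):
            gradient = sobel(gray)                   # gradient image leads the flooding
            markers = np.zeros_like(gray, dtype=int)
            markers[gray < low] = 1                  # background seed
            markers[gray > high] = 2                 # nucleus seed
            return watershed(gradient, markers)      # isotropic flooding from markers

        img = np.zeros((64, 64))
        img[20:40, 20:40] = 1.0                      # one synthetic nucleus
        print(np.unique(segment_nuclei(img)))        # -> [1 2]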

  20. An advanced image analysis tool for the quantification and characterization of breast cancer in microscopy images.

    PubMed

    Goudas, Theodosios; Maglogiannis, Ilias

    2015-03-01

    The paper presents an advanced image analysis tool for the accurate and fast characterization and quantification of cancer and apoptotic cells in microscopy images. The proposed tool utilizes adaptive thresholding and a Support Vector Machines classifier. The segmentation results are enhanced through a Majority Voting and a Watershed technique, while an object labeling algorithm has been developed for the fast and accurate validation of the recognized cells. Expert pathologists evaluated the tool and the reported results are satisfying and reproducible. PMID:25681102

  1. Theoretical analysis of quantum ghost imaging through turbulence

    SciTech Connect

    Chan, Kam Wai Clifford; Simon, D. S.; Sergienko, A. V.; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Dixon, P. Ben; Howland, Gregory A.; Howell, John C.; Eberly, Joseph H.; O'Sullivan, Malcolm N.; Rodenburg, Brandon; Boyd, Robert W.

    2011-10-15

    Atmospheric turbulence generally affects the resolution and visibility of an image in long-distance imaging. In a recent quantum ghost imaging experiment [P. B. Dixon et al., Phys. Rev. A 83, 051803 (2011)], it was found that the effect of the turbulence can nevertheless be mitigated under certain conditions. This paper gives a detailed theoretical analysis of the setup and results reported in that experiment. Entangled photons with a finite correlation area and a turbulence model beyond the phase-screen approximation are considered.

  2. Fundus image change analysis: geometric and radiometric normalization

    NASA Astrophysics Data System (ADS)

    Shin, David S.; Kaiser, Richard S.; Lee, Michael S.; Berger, Jeffrey W.

    1999-06-01

    Image change analysis will potentiate fundus feature quantitation in natural history and intervention studies for major blinding diseases such as age-related macular degeneration and diabetic retinopathy. Geometric and radiometric normalization of fundus images acquired at two points in time are required for accurate change detection, but existing methods are unsatisfactory for change analysis. We have developed and explored algorithms for correction of image misalignment (geometric) and inter- and intra-image brightness variation (radiometric) in order to facilitate highly accurate change detection. Thirty-five millimeter color fundus photographs were digitized at 500 to 1000 dpi. Custom-developed registration algorithms correcting for translation only; translation and rotation; translation, rotation, and scale; and polynomial based image-warping algorithms allowed for exploration of registration accuracy required for change detection. Registration accuracy beyond that offered by rigid body transformation is required for accurate change detection. Radiometric correction required shade-correction and normalization of inter-image statistical parameters. Precise geometric and radiometric normalization allows for highly accurate change detection. To our knowledge, these results are the first demonstration of the combination of geometric and radiometric normalization offering sufficient accuracy to allow for accurate fundus image change detection potentiating longitudinal study of retinal disease.
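
    A minimal sketch of the geometric step under stated assumptions: given matched retinal landmarks in two images, a similarity transform (translation, rotation, scale) is estimated by linear least squares. Landmark matching, polynomial warping, and the radiometric step are omitted, and all names are illustrative.

        import numpy as np

        def fit_similarity(src, dst):
            # Solve dst ~ s*R(theta)*src + t via the linear model
            # u = a*x - b*y + tx, v = b*x + a*y + ty, with a = s*cos, b = s*sin.
            A, b = [], []
            for (x, y), (u, v) in zip(src, dst):
                A.append([x, -y, 1, 0]); b.append(u)
                A.append([y,  x, 0, 1]); b.append(v)
            (a, bb, tx, ty), *_ = np.linalg.lstsq(np.asarray(A, float),
                                                  np.asarray(b, float), rcond=None)
            return np.array([[a, -bb], [bb, a]]), np.array([tx, ty])

        src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
        dst = src @ np.array([[0., -1.], [1., 0.]]) + [2., 3.]   # known motion
        R, t = fit_similarity(src, dst)
        print(np.round(R, 3), np.round(t, 3))                    # recovers the motion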

  3. Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop

    NASA Technical Reports Server (NTRS)

    Vane, G. (Editor); Goetz, A. F. H. (Editor)

    1985-01-01

    The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.

  4. Image-based histologic grade estimation using stochastic geometry analysis

    NASA Astrophysics Data System (ADS)

    Petushi, Sokol; Zhang, Jasper; Milutinovic, Aladin; Breen, David E.; Garcia, Fernando U.

    2011-03-01

    Background: Low reproducibility of histologic grading of breast carcinoma due to its subjectivity has traditionally diminished the prognostic value of histologic breast cancer grading. The objective of this study is to assess the effectiveness and reproducibility of grading breast carcinomas with automated computer-based image processing that utilizes stochastic geometry shape analysis. Methods: We used histology images stained with Hematoxylin & Eosin (H&E) from invasive mammary carcinoma (no special type) cases as a source domain and study environment. We developed a customized hybrid semi-automated segmentation algorithm to cluster the raw image data and reduce the image domain complexity to a binary representation, with the foreground representing regions of high density of malignant cells. A second algorithm was developed to apply stochastic geometry and texture analysis measurements to the segmented images and to produce shape distributions, transforming the original color images into a histogram representation that captures the properties that distinguish the various histological grades. Results: Computational results were compared against known histological grades assigned by the pathologist. The Earth Mover's Distance (EMD) similarity metric and the K-Nearest Neighbors (KNN) classification algorithm provided correlations between the high-dimensional set of shape distributions and the a priori known histological grades. Conclusion: Computational pattern analysis of histology shows promise as an effective software tool in breast cancer histological grading.

  5. SIMA: Python software for analysis of dynamic fluorescence imaging data

    PubMed Central

    Kaifosh, Patrick; Zaremba, Jeffrey D.; Danielson, Nathan B.; Losonczy, Attila

    2014-01-01

    Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/. PMID:25295002

  6. Automatic quantitative analysis of cardiac MR perfusion images

    NASA Astrophysics Data System (ADS)

    Breeuwer, Marcel M.; Spreeuwers, Luuk J.; Quist, Marcel J.

    2001-07-01

    Magnetic Resonance Imaging (MRI) is a powerful technique for imaging cardiovascular diseases. The introduction of cardiovascular MRI into clinical practice is however hampered by the lack of efficient and accurate image analysis methods. This paper focuses on the evaluation of blood perfusion in the myocardium (the heart muscle) from MR images, using contrast-enhanced ECG-triggered MRI. We have developed an automatic quantitative analysis method, which works as follows. First, image registration is used to compensate for translation and rotation of the myocardium over time. Next, the boundaries of the myocardium are detected and for each position within the myocardium a time-intensity profile is constructed. The time interval during which the contrast agent passes for the first time through the left ventricle and the myocardium is detected and various parameters are measured from the time-intensity profiles in this interval. The measured parameters are visualized as color overlays on the original images. Analysis results are stored, so that they can later on be compared for different stress levels of the heart. The method is described in detail in this paper and preliminary validation results are presented.
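
    A toy sketch of the parameter-measurement step, assuming registration and myocardial boundary detection are already done: for each position, the time-intensity profile yields peak enhancement, time to peak, and maximum upslope. The function name and frame interval are illustrative, not from the paper.

        import numpy as np

        def perfusion_params(curves, dt=1.0):
            # curves: (n_positions, n_frames) time-intensity profiles.
            peak = curves.max(axis=1)
            t_peak = curves.argmax(axis=1) * dt
            upslope = np.diff(curves, axis=1).max(axis=1) / dt
            return peak, t_peak, upslope

        t = np.arange(30.0)
        curve = np.clip(t - 5, 0, 10)                  # toy first-pass profile
        print(perfusion_params(curve[None, :]))        # peak 10, t_peak 15, slope 1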

  7. Digital interactive image analysis by array processing

    NASA Technical Reports Server (NTRS)

    Sabels, B. E.; Jennings, J. D.

    1973-01-01

    An attempt is made to draw a parallel between the existing geophysical data processing service industries and the emerging earth resources data support requirements. The relationship of seismic data analysis to ERTS data analysis is natural because in either case data is digitally recorded in the same format, resulting from remotely sensed energy which has been reflected, attenuated, shifted and degraded on its path from the source to the receiver. In the seismic case the energy is acoustic, ranging in frequencies from 10 to 75 cps, for which the lithosphere appears semi-transparent. In earth survey remote sensing through the atmosphere, visible and infrared frequency bands are being used. Yet the hardware and software required to process the magnetically recorded data from the two realms of inquiry are identical and similar, respectively. The resulting data products are similar.

  8. Image analysis of chest radiographs. Final report

    SciTech Connect

    Hankinson, J.L.

    1982-06-01

    The report demonstrates the feasibility of using a computer for automated interpretation of chest radiographs for pneumoconiosis. The primary goal of this project was to continue testing and evaluating the prototype system with a larger set of films. After review of the final contract report and a review of the current literature, it was clear that several modifications to the prototype system were needed before the project could continue. These modifications can be divided into two general areas. The first area was in improving the stability of the system and compensating for the diversity of film quality which exists in films obtained in a surveillance program. Since the system was to be tested with a large number of films, it was impractical to be extremely selective of film quality. The second area is in terms of processing time. With a large set of films, total processing time becomes much more significant. An image display was added to the system so that the computer determined lung boundaries could be verified for each film. A film handling system was also added, enabling the system to scan films continuously without attendance.

  9. Image analysis of optic nerve disease.

    PubMed

    Burgoyne, C F

    2004-11-01

    Existing methodologies for imaging the optic nerve head surface topography and measuring the retinal nerve fibre layer thickness include confocal scanning laser ophthalmoscopy (Heidelberg retinal tomograph), optical coherence tomography, and scanning laser polarimetry. For cross-sectional screening of patient populations, all three approaches have achieved sensitivities and specificities in the 60-80% range in various studies, with occasional specificities greater than 90% in select populations. Nevertheless, these methods are not likely to provide useful assistance for the experienced examiner at their present level of performance. For longitudinal change detection in individual patients, strategies for clinically specific change detection have been rigorously evaluated for confocal scanning laser tomography only. While these initial studies are encouraging, applying these algorithms in larger numbers of patients is now necessary. Future directions for these technologies are likely to include ultra-high resolution optical coherence tomography, the use of neural network/machine learning classifiers to improve clinical decision-making, and the ability to evaluate the susceptibility of individual optic nerve heads to potential damage from a given level of intraocular pressure or systemic blood pressure. PMID:15534606

  10. Image Analysis of DNA Fiber and Nucleus in Plants.

    PubMed

    Ohmido, Nobuko; Wako, Toshiyuki; Kato, Seiji; Fukui, Kiichi

    2016-01-01

    Advances in cytology have led to the application of a wide range of visualization methods in plant genome studies. Image analysis methods are indispensable tools where morphology, density, and color play important roles in biological systems. Visualization and image analysis methods are useful techniques in the analysis of the detailed structure and function of extended DNA fibers (EDFs) and interphase nuclei. The EDF offers the highest spatial resolving power for revealing genome structure, and it can be used for physical mapping, especially for closely located genes and tandemly repeated sequences. On the other hand, analyzing nuclear DNA and proteins reveals nuclear structure and functions. In this chapter, we describe the image analysis protocol for quantitatively analyzing different types of plant genome preparations: EDFs and interphase nuclei. PMID:27557694

  11. Imaging for dismantlement verification: information management and analysis algorithms

    SciTech Connect

    Robinson, Sean M.; Jarman, Kenneth D.; Pitts, W. Karl; Seifert, Allen; Misner, Alex C.; Woodring, Mitchell L.; Myjak, Mitchell J.

    2012-01-11

    The level of detail discernible in imaging techniques has generally excluded them from consideration as verification tools in inspection regimes. An image will almost certainly contain highly sensitive information, and storing a comparison image will almost certainly violate a cardinal principle of information barriers: that no sensitive information be stored in the system. To overcome this problem, some features of the image might be reduced to a few parameters suitable for definition as an attribute, which must be non-sensitive to be acceptable in an Information Barrier regime. However, this process must be performed with care. Features like the perimeter, area, and intensity of an object, for example, might reveal sensitive information. Any data-reduction technique must provide sufficient information to discriminate a real object from a spoofed or incorrect one, while avoiding disclosure (or storage) of any sensitive object qualities. Ultimately, algorithms are intended to provide only a yes/no response verifying the presence of features in the image. We discuss the utility of imaging for arms control applications and present three image-based verification algorithms in this context. The algorithms reduce full image information to non-sensitive feature information, in a process that is intended to enable verification while eliminating the possibility of image reconstruction. The underlying images can be highly detailed, since they are dynamically generated behind an information barrier. We consider the use of active (conventional) radiography alone and in tandem with passive (auto) radiography. We study these algorithms in terms of technical performance in image analysis and application to an information barrier scheme.

  12. Functional imaging of auditory scene analysis.

    PubMed

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. PMID:23968821

  13. Perfect imaging analysis of the spherical geodesic waveguide

    NASA Astrophysics Data System (ADS)

    González, Juan C.; Benítez, Pablo; Miñano, Juan C.; Grabovičkić, Dejan

    2012-12-01

    The Negative Refractive Lens (NRL) has shown that an optical system can produce images with details below the classic Abbe diffraction limit. This optical system transmits the electromagnetic fields emitted by an object plane towards an image plane, producing the same field distribution in both planes. In particular, a Dirac delta electric field in the object plane is focused without diffraction limit to a Dirac delta electric field in the image plane. Two devices with positive refraction, the Maxwell Fish Eye lens (MFE) and the Spherical Geodesic Waveguide (SGW), have been claimed to break the diffraction limit, although with a different meaning of imaging. In those cases, what was considered is the power transmission from a point source to a point receptor, which falls drastically when the receptor is displaced from the focus by a distance much smaller than the wavelength. Although these systems can detect displacements as small as λ/3000, they cannot be compared to the NRL, since the concept of image is different: the SGW deals only with a point source and drain, while in the case of the NRL there is an object and an image surface. Here, an analysis of the SGW with defined object and image surfaces (both conical) is presented, similar to the analysis of the NRL. The results show that a Dirac delta electric field on the object surface produces an image below the diffraction limit on the image surface.

  14. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings, where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research is the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough to handle occlusions and other disturbances reliably. The second objective is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose, a RANSAC-based method for congruence analysis has been developed. It is based on the sequential transformation of randomly selected point groups from one epoch to another using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
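
    A hedged sketch of the congruence test: repeatedly fit a 3D similarity transformation to a random minimal point sample from epoch A to epoch B and keep the transform with the most inliers; the inlier mask marks the stable (congruent) region. The closed-form fit is the standard Umeyama solution, not necessarily the authors' implementation, and all tolerances and data are illustrative.

        import numpy as np

        def fit_similarity_3d(P, Q):
            # Least-squares s, R, t with Q ~ s * R @ P + t (Umeyama closed form).
            muP, muQ = P.mean(0), Q.mean(0)
            Pc, Qc = P - muP, Q - muQ
            U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))
            d = np.sign(np.linalg.det(U @ Vt))
            R = U @ np.diag([1.0, 1.0, d]) @ Vt
            s = (S[0] + S[1] + d * S[2]) / (Pc ** 2).sum(1).mean()
            return s, R, muQ - s * R @ muP

        def ransac_stable(P, Q, iters=200, tol=0.05, seed=3):
            rng = np.random.default_rng(seed)
            best = np.zeros(len(P), bool)
            for _ in range(iters):
                idx = rng.choice(len(P), 3, replace=False)
                s, R, t = fit_similarity_3d(P[idx], Q[idx])
                resid = np.linalg.norm(Q - (s * (P @ R.T) + t), axis=1)
                inliers = resid < tol
                if inliers.sum() > best.sum():
                    best = inliers
            return best                            # True = congruent (stable) point

        rng = np.random.default_rng(3)
        P = rng.normal(size=(100, 3))
        c, sn = np.cos(0.1), np.sin(0.1)
        R = np.array([[c, -sn, 0], [sn, c, 0], [0, 0, 1]])
        Q = 1.2 * P @ R.T + [0.5, -0.2, 0.1]
        Q[80:] += rng.normal(0, 0.5, (20, 3))      # locally deformed region
        print(ransac_stable(P, Q).sum())           # ~80 stable points found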

  15. A unified noise analysis for iterative image estimation.

    PubMed

    Qi, Jinyi

    2003-11-01

    Iterative image estimation methods have been widely used in emission tomography. Accurate estimation of the uncertainty of the reconstructed images is essential for quantitative applications. While both iteration-based noise analysis and fixed-point noise analysis have been developed, current iteration-based results are limited to only a few algorithms that have an explicit multiplicative update equation and some may not converge to the fixed-point result. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. Under a certain condition, the proposed method does not require an explicit expression of the preconditioner. By deriving the fixed-point expression from the iteration-based result, we show that the proposed iteration-based noise analysis is consistent with fixed-point analysis. Examples in emission tomography and transmission tomography are shown. The results are validated using Monte Carlo simulations. PMID:14653559

  16. Region-based Statistical Analysis of 2D PAGE Images

    PubMed Central

    Li, Feng; Seillier-Moiseiwitsch, Françoise; Korostyshevskiy, Valeriy R.

    2011-01-01

    A new comprehensive procedure for statistical analysis of two-dimensional polyacrylamide gel electrophoresis (2D PAGE) images is proposed, including protein region quantification, normalization and statistical analysis. Protein regions are defined by the master watershed map that is obtained from the mean gel. By working with these protein regions, the approach bypasses the current bottleneck in the analysis of 2D PAGE images: it does not require spot matching. Background correction is implemented in each protein region by local segmentation. Two-dimensional locally weighted smoothing (LOESS) is proposed to remove any systematic bias after quantification of protein regions. Proteins are separated into mutually independent sets based on detected correlations, and a multivariate analysis is used on each set to detect the group effect. A strategy for multiple hypothesis testing based on this multivariate approach combined with the usual Benjamini-Hochberg FDR procedure is formulated and applied to the differential analysis of 2D PAGE images. Each step in the analytical protocol is shown by using an actual dataset. The effectiveness of the proposed methodology is shown using simulated gels in comparison with the commercial software packages PDQuest and Dymension. We also introduce a new procedure for simulating gel images. PMID:21850152
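
    The final multiple-testing step can be illustrated with a minimal Benjamini-Hochberg procedure (NumPy); the p-values here are placeholders for those produced by the region-level multivariate tests.

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            p = np.asarray(pvals, float)
            order = np.argsort(p)
            m = len(p)
            passed = p[order] <= q * np.arange(1, m + 1) / m
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            reject = np.zeros(m, bool)
            reject[order[:k]] = True
            return reject           # True = protein region called differential

        print(benjamini_hochberg([0.001, 0.02, 0.04, 0.3, 0.8]))
        # -> [ True  True False False False]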

  17. Development of an image-analysis light-scattering technique

    NASA Astrophysics Data System (ADS)

    Algarni, Saad; Kashuri, Hektor; Iannacchione, Germano

    2013-03-01

    We describe progress in developing a versatile image-analysis approach for light-scattering experiments. Recent advances in image analysis algorithms, computational power, and CCD image capture have allowed for the complete digital recording of the scattering of coherent laser light by a wide variety of samples. This digital record can then yield both static and dynamic information about the scattering events. Our approach is described using a very simple and inexpensive experimental arrangement for liquid samples. Calibration experiments were performed on aqueous suspensions of latex spheres of 0.5 and 1.0 micrometer diameter at three concentrations of 2 × 10⁻⁶, 1 × 10⁻⁶, and 5 × 10⁻⁷ % w/w at room temperature. The resulting data span a wave-vector range of q = 10² to 10⁵ cm⁻¹ and time averages over 0.05 to 1200 sec. The static analysis yields particle sizes in good agreement with expectations, and a simple dynamic analysis yields an estimate of the characteristic time scale of the particle dynamics. Further developments in image corrections (laser stability, vibration, curvature, etc.) as well as time auto-correlation analysis will also be discussed.

  18. Photoacoustic Image Analysis for Cancer Detection and Building a Novel Ultrasound Imaging System

    NASA Astrophysics Data System (ADS)

    Sinha, Saugata

    Photoacoustic (PA) imaging is a rapidly emerging non-invasive soft tissue imaging modality which has the potential to detect tissue abnormality at an early stage. Photoacoustic images map the spatially varying optical absorption property of tissue. In multiwavelength photoacoustic imaging, the soft tissue is imaged with different wavelengths, tuned to the absorption peaks of the specific light-absorbing tissue constituents or chromophores, to obtain images with different contrasts of the same tissue sample. From those images, the spatially varying concentration of the chromophores can be recovered. As multiwavelength PA images can provide important physiological information related to the function and molecular composition of the tissue, they can be used for the diagnosis of cancer lesions and the differentiation of malignant tumors from benign tumors. In this research, a number of parameters have been extracted from multiwavelength 3D PA images of freshly excised human prostate and thyroid specimens, imaged at five different wavelengths. Using marked histology slides as ground truth, regions of interest (ROI) corresponding to cancer, benign and normal regions have been identified in the PA images. The extracted parameters belong to different categories, namely chromophore concentration, frequency parameters and PA image pixels, and they represent different physiological and optical properties of the tissue specimens. Statistical analysis has been performed to test whether the extracted parameters are significantly different between cancer, benign and normal regions. A multidimensional [29-dimensional] feature set, built with the parameters extracted from the 3D PA images, has been divided randomly into training and testing sets. The training set has been used to train support vector machine (SVM) and neural network (NN) classifiers, while the performance of the classifiers in differentiating tissue pathologies has been determined on the testing dataset. Using the NN

  19. Kepler mission exoplanet transit data analysis using fractal imaging

    NASA Astrophysics Data System (ADS)

    Dehipawala, S.; Tremberger, G.; Majid, Y.; Holden, T.; Lieberman, D.; Cheung, T.

    2012-10-01

    The Kepler mission is designed to survey a fist-sized patch of the sky within the Milky Way galaxy for the discovery of exoplanets, with emphasis on near Earth-size exoplanets in or near the habitable zone. The Kepler space telescope would detect the brightness fluctuation of a host star and extract periodic dimming in the lightcurve caused by exoplanets that cross in front of their host star. The photometric data of a host star could be interpreted as an image where fractal imaging would be applicable. Fractal analysis could elucidate the incomplete data limitation posed by the data integration window. The fractal dimension difference between the lower and upper halves of the image could be used to identify anomalies associated with transits and stellar activity as the buried signals are expected to be in the lower half of such an image. Using an image fractal dimension resolution of 0.04 and defining the whole image fractal dimension as the Chi-square expected value of the fractal dimension, a p-value can be computed and used to establish a numerical threshold for decision making that may be useful in further studies of lightcurves of stars with candidate exoplanets. Similar fractal dimension difference approaches would be applicable to the study of photometric time series data via the Higuchi method. The correlated randomness of the brightness data series could be used to support inferences based on image fractal dimension differences. Fractal compression techniques could be used to transform a lightcurve image, resulting in a new image with a new fractal dimension value, but this method has been found to be ineffective for images with high information capacity. The three studied criteria could be used together to further constrain the Kepler list of candidate lightcurves of stars with possible exoplanets that may be planned for ground-based telescope confirmation.
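
    As a rough illustration of the image-fractal-dimension idea (not the paper's exact pipeline, which includes a 0.04 dimension resolution and a Chi-square-based p-value), a box-counting dimension for a binarized lightcurve image can be sketched as follows:

        import numpy as np

        def box_counting_dimension(mask):
            # mask: square boolean image whose side is a power of two.
            sizes, counts = [], []
            s = mask.shape[0]
            while s >= 2:
                boxes = mask.reshape(mask.shape[0] // s, s,
                                     mask.shape[1] // s, s).any(axis=(1, 3))
                sizes.append(s)
                counts.append(boxes.sum())
                s //= 2
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)),
                                  np.log(np.asarray(counts, float)), 1)
            return slope

        img = np.zeros((64, 64), bool)
        img[32, :] = True                               # a line: dimension ~ 1
        print(round(box_counting_dimension(img), 2))    # -> 1.0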

  20. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1983-01-01

    Analysis during the quarter was carried out on geometric, radiometric, and information content aspects of both MSS and thematic mapper (TM) data. Test sites in Webster County, Iowa, and Chicago, IL, and near Joliet, IL, were studied. Band-to-band registration was evaluated: TM bands 5 and 7 were found to be approximately 0.5 pixel out of registration with bands 1, 2, 3, and 4, and the thermal band was found to be misregistered by four 30-m pixels to the east and one pixel to the south. Certain MSS bands indicated nominally 0.25 pixel misregistration. Radiometrically, some striping was observed in TM bands, and significant oscillatory noise patterns exist in MSS data, possibly due to jitter. Information content was compared before and after cubic convolution resampling, and no differences were observed in statistics or separability of basic scene classes.

  1. Cascaded image analysis for dynamic crack detection in material testing

    NASA Astrophysics Data System (ADS)

    Hampel, U.; Maas, H.-G.

    Concrete specimens in civil engineering material testing often show fissures or hairline cracks. These cracks develop dynamically. Starting at a width of a few microns, they usually cannot be detected visually or in an image from a camera viewing the whole specimen. Conventional image analysis techniques will detect fissures only if they show a width in the order of one pixel. To be able to detect and measure fissures with a width of a fraction of a pixel at an early stage of their development, a cascaded image analysis approach has been developed, implemented and tested. The basic idea of the approach is to detect discontinuities in dense surface deformation vector fields. These deformation vector fields between consecutive stereo image pairs, generated by cross correlation or least squares matching, show a precision in the order of 1/50 pixel. Hairline cracks can be detected and measured by applying edge detection techniques such as a Sobel operator to the results of the image matching process. Cracks show up as linear discontinuities in the deformation vector field and can be vectorized by edge chaining. In practical tests of the method, cracks with a width of 1/20 pixel could be detected, and their width could be determined with a precision of 1/50 pixel.

  2. The analysis of image feature robustness using CometCloud

    PubMed Central

    Qi, Xin; Kim, Hyunjoo; Xing, Fuyong; Parashar, Manish; Foran, David J.; Yang, Lin

    2012-01-01

    The robustness of image features is a very important consideration in quantitative image analysis. The objective of this paper is to investigate the robustness of a range of image texture features using hematoxylin-stained breast tissue microarray slides, assessed while simulating different imaging challenges including out-of-focus blur, changes in magnification, and variations in illumination, noise, compression, distortion, and rotation. We employed five texture analysis methods and tested them while introducing all of the challenges listed above. The texture features that were evaluated include the co-occurrence matrix, center-symmetric auto-correlation, the texture feature coding method, local binary patterns, and textons. Owing to the independence of each transformation and texture descriptor, a network-structured combination was proposed and deployed on the Rutgers private cloud. The experiments utilized 20 randomly selected tissue microarray cores. All combinations of the image transformations and deformations were calculated, and the whole feature extraction procedure was completed in 70 minutes on a cloud equipped with 20 nodes. Center-symmetric auto-correlation outperforms the other four texture descriptors but also requires the longest computational time; it is roughly 10 times slower than local binary patterns and textons. From a speed perspective, both the local binary pattern and texton features provided excellent performance for classification and content-based image retrieval. PMID:23248759

  3. Texture Analysis of Medical Images Using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Fernández, Margarita; Mavilio, Adriana

    2002-08-01

    Texture analysis of images can contribute to a better interpretation of medical images. This type of analysis provides not only qualitative but also quantitative information about the degree of tissue damage. In this work, an algorithm is developed which uses the wavelet transform to carry out the supervised segmentation of echographic images of injured Achilles tendons of athletes. To construct the pattern, the image corresponding to the athlete's healthy tendon tissue is taken as a reference, exploiting the duplicity of this structure. Texture features are calculated on the wavelet expansion coefficients of the images. The Mahalanobis distance between texture samples of the injured tissue and the pattern texture is computed and used as the discriminating function. It is concluded that this distance, after appropriate medical calibration, can offer quantitative information about the degree of injury at every point along the damaged tissue. Further, its behavior along the segmented image can serve as a measure of the degree of change in tissue properties. A parameter, the similarity degree, is defined and obtained by taking into account the correlation between distance histograms for the healthy tissue and the damaged one. It is also shown that this parameter, when properly calibrated, can offer a quantitative global evaluation of the state of the injured tissue.
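
    The discriminating function itself is compact; a minimal sketch, assuming texture feature vectors (e.g., wavelet-coefficient statistics) have already been computed for the healthy-tendon pattern and for the tissue under test:

        import numpy as np

        rng = np.random.default_rng(4)
        pattern = rng.normal(0, 1, (200, 4))     # healthy-tissue feature samples
        mu = pattern.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(pattern, rowvar=False))

        def mahalanobis(x):
            d = x - mu
            return float(np.sqrt(d @ cov_inv @ d))

        print(round(mahalanobis(mu), 3))         # ~0 for the pattern itself
        print(round(mahalanobis(mu + 3), 3))     # large for atypical (injured) texture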

  4. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods for electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with a step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are therefore also valid for the other reconstruction algorithms. PMID:24845059

  5. Computer-aided photometric analysis of dynamic digital bioluminescent images

    NASA Astrophysics Data System (ADS)

    Gorski, Zbigniew; Bembnista, T.; Floryszak-Wieczorek, J.; Domanski, Marek; Slawinski, Janusz

    2003-04-01

    The paper deals with photometric and morphologic analysis of bioluminescent images obtained by registering light radiated directly from plant objects. The subject of this work is the registration of images from ultra-weak light sources by the single photon counting (SPC) technique. The radiation is registered with a 16-bit charge coupled device (CCD) camera "Night Owl" together with WinLight EG&G Berthold software. Additional application-specific software has been developed in order to deal with objects that change during the exposure time. The advantages of the elaborated set of easily configurable tools, named FCT, for computer-aided photometric and morphologic analysis of numerous series of quantitatively imperfect chemiluminescent images are described. Instructions on how to use these tools are given and exemplified with several algorithms for transforming image libraries. Using the proposed FCT set, automatic photometric and morphologic analysis reveals the information hidden within series of chemiluminescent images reflecting defensive processes in poinsettia (Euphorbia pulcherrima Willd) leaves affected by the pathogenic fungus Botrytis cinerea.

  6. Automated analysis of image mammogram for breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Nurhasanah; Sampurno, Joko; Faryuni, Irfana Diah; Ivansyah, Okto

    2016-03-01

    Medical imaging helps doctors diagnose and detect diseases that attack the inside of the body without surgery. A mammogram is a medical image of the inner breast. Diagnosis of breast cancer needs to be done in detail and as soon as possible to determine the next medical treatment. The aim of this work is to increase the objectivity of clinical diagnosis by using fractal analysis. This study applies a fractal method based on 2D Fourier analysis to determine the density of normal and abnormal tissue, and applies a segmentation technique based on the K-Means clustering algorithm to abnormal images to determine organ boundaries and calculate the area of the segmented organ. The results show that the fractal method based on 2D Fourier analysis can be used to distinguish between normal and abnormal breasts, and that segmentation with the K-Means clustering algorithm is able to generate the boundaries of normal and abnormal tissue, so the area of the abnormal tissue can be determined.
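
    A minimal sketch of the K-Means segmentation step with scikit-learn: cluster pixel intensities, take the cluster map as tissue regions, and measure the area of the brightest region (a stand-in for the abnormal tissue; all values are synthetic, not from the paper).

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_segment(image, k=3):
            flat = image.reshape(-1, 1).astype(float)
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(flat)
            return labels.reshape(image.shape)

        img = np.zeros((32, 32))
        img[8:16, 8:16] = 0.5                    # mid-intensity region
        img[20:28, 20:28] = 1.0                  # bright (abnormal-like) region
        seg = kmeans_segment(img)
        brightest = np.argmax([img[seg == c].mean() for c in range(3)])
        print("area (px):", int(np.sum(seg == brightest)))   # -> 64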

  7. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-05-25

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.

  8. Methods for spectral image analysis by exploiting spatial simplicity

    DOEpatents

    Keenan, Michael R.

    2010-11-23

    Several full-spectrum imaging techniques have been introduced in recent years that promise to provide rapid and comprehensive chemical characterization of complex samples. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful chemical information. Multivariate factor analysis techniques, such as Principal Component Analysis and Alternating Least Squares-based Multivariate Curve Resolution, have proven effective for extracting the essential chemical information from high dimensional spectral image data sets into a limited number of components that describe the spectral characteristics and spatial distributions of the chemical species comprising the sample. There are many cases, however, in which those constraints are not effective and where alternative approaches may provide new analytical insights. For many cases of practical importance, imaged samples are "simple" in the sense that they consist of relatively discrete chemical phases. That is, at any given location, only one or a few of the chemical species comprising the entire sample have non-zero concentrations. The methods of spectral image analysis of the present invention exploit this simplicity in the spatial domain to make the resulting factor models more realistic. Therefore, more physically accurate and interpretable spectral and abundance components can be extracted from spectral images that have spatially simple structure.
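    The patented algorithm itself is not spelled out in the abstract, but the "spatial simplicity" idea can be pictured with a conventional baseline: factor the spectra non-negatively, then hard-assign each pixel to its dominant component, a crude "one species per pixel" constraint. The sketch below uses scikit-learn's NMF as a stand-in and is not the invention described above.

```python
import numpy as np
from sklearn.decomposition import NMF

def simple_phase_map(cube, n_components=4):
    """cube: hyperspectral image of shape (rows, cols, bands).
    Returns a per-pixel phase map and the component spectra."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    model = NMF(n_components=n_components, init='nndsvd', max_iter=500)
    abundances = model.fit_transform(X)   # non-negative concentrations
    spectra = model.components_           # non-negative pure spectra
    # Hard spatial-simplicity surrogate: keep only the dominant phase
    phase = abundances.argmax(axis=1).reshape(rows, cols)
    return phase, spectra
```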

  9. Quantitative Medical Image Analysis for Clinical Development of Therapeutics

    NASA Astrophysics Data System (ADS)

    Analoui, Mostafa

    There has been significant progress in the development of therapeutics for the prevention and management of several diseases in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing of these, being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of reviewer bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.

  10. RGB calibration for color image analysis in machine vision.

    PubMed

    Chang, Y C; Reid, J F

    1996-01-01

    A color calibration method for correcting the variations in RGB color values caused by vision system components was developed and tested in this study. The calibration scheme concentrated on comprehensively estimating and removing the RGB errors without specifying error sources and their effects. The algorithm for color calibration was based upon the use of a standardized color chart and developed as a preprocessing tool for color image analysis. According to the theory of image formation, RGB errors in color images were categorized into multiplicative and additive errors. Multiplicative and additive errors encompassed various error sources: gray-level shift, variation in amplification and quantization in the camera electronics or frame grabber, change of the color temperature of illumination with time, and related factors. The RGB errors of arbitrary colors in an image were estimated from the RGB errors of standard colors contained in the image. The color calibration method also contained an algorithm for correcting the nonuniformity of illumination in the scene. The algorithm was tested under two different conditions: uniform and nonuniform illumination in the scene. The RGB errors of arbitrary colors in test images were almost completely removed after color calibration. The maximum residual error was seven gray levels under uniform illumination and 12 gray levels under nonuniform illumination. Most residual RGB errors were caused by residual nonuniformity of illumination in the images. The test results showed that the developed method was effective in correcting the variations in RGB color values caused by vision system components. PMID:18290059
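    As an illustration of the multiplicative/additive error model, the sketch below fits a per-channel gain and offset from chart patches by least squares and then inverts the model. It is a simplified stand-in for the paper's calibration scheme and omits the illumination-nonuniformity correction.

```python
import numpy as np

def fit_channel_correction(measured, reference):
    """measured, reference: (n_patches, 3) arrays of chart RGB values.
    Fits measured ~ gain * reference + offset independently per channel."""
    gains, offsets = np.empty(3), np.empty(3)
    for c in range(3):
        A = np.column_stack([reference[:, c], np.ones(len(reference))])
        (g, o), *_ = np.linalg.lstsq(A, measured[:, c], rcond=None)
        gains[c], offsets[c] = g, o
    return gains, offsets

def correct_image(img, gains, offsets):
    # Invert the error model: true ~ (measured - offset) / gain
    return np.clip((img.astype(float) - offsets) / gains, 0, 255)
```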

  11. Non-Harmonic Analysis Applied to Optical Coherence Tomography Imaging

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Uchida, Tetsuya; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2012-02-01

    A new processing technique called non-harmonic analysis (NHA) is proposed for optical coherence tomography (OCT) imaging. Conventional Fourier-domain OCT employs the discrete Fourier transform (DFT), which depends on the window function and length. The axial resolution of an OCT image calculated using the DFT is inversely proportional to the full width at half maximum (FWHM) of the wavelength range. The FWHM of the wavelength range is limited by the sweeping range of the source in swept-source OCT and by the number of CCD pixels in spectral-domain OCT. However, the NHA process does not have such constraints; NHA can resolve high frequencies irrespective of the window function and the frame length of the sampled data. In this study, the NHA process is described, applied to OCT imaging, and compared with OCT images based on the DFT. To demonstrate the benefits of using NHA for OCT, we perform OCT imaging with NHA of an onion skin. The results reveal that NHA can achieve an image resolution equivalent to that of a 100-nm sweep range while using a significantly reduced wavelength range. They also reveal the potential of this technique for high-resolution imaging without a broadband source. However, the long calculation times required for NHA must be addressed if it is to be used in clinical applications.
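    For reference, the resolution limit mentioned above is usually quoted, for a Gaussian source spectrum, with the textbook Fourier-domain OCT relation (general background, not a result of this paper):

```latex
\delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^2}{\Delta\lambda}
```

    where \lambda_0 is the center wavelength and \Delta\lambda is the FWHM of the wavelength range; NHA aims to sidestep this DFT-imposed limit.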

  12. Infrared medical image visualization and anomalies analysis method

    NASA Astrophysics Data System (ADS)

    Gong, Jing; Chen, Zhong; Fan, Jing; Yan, Liang

    2015-12-01

    Infrared medical examination finds diseases by scanning the overall human body temperature with infrared thermal equipment and obtaining the temperature anomalies of the corresponding body parts. In order to obtain the temperature anomalies and diseased parts, an infrared medical image visualization and anomaly analysis method is proposed in this paper. Firstly, the original data are visualized as a single-channel gray image; secondly, the normalized gray image is turned into a pseudo-color image; thirdly, a background segmentation method is applied to filter out background noise; fourthly, the special pixels are clustered with a breadth-first search algorithm; lastly, the regions of temperature anomalies or diseased parts are marked. Tests show that this is an efficient and accurate way to intuitively analyze and diagnose diseased body parts through temperature anomalies.
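    The clustering step above amounts to connected-component grouping of the anomalous pixels. A minimal breadth-first-search sketch, assuming 4-connectivity (the authors' connectivity choice is not stated), is:

```python
from collections import deque
import numpy as np

def bfs_label(mask):
    """mask: 2D boolean array of anomalous pixels.
    Returns an integer label image; 0 marks background."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        current += 1                      # start a new region
        labels[sy, sx] = current
        queue = deque([(sy, sx)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels
```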

  13. Bayesian Analysis and Segmentation of Multichannel Image Sequences

    NASA Astrophysics Data System (ADS)

    Chang, Michael Ming Hsin

    This thesis is concerned with the segmentation and analysis of multichannel image sequence data. In particular, we use the maximum a posteriori probability (MAP) criterion and Gibbs random fields (GRFs) to formulate the problems. We start by reviewing the significance of MAP estimation with GRF priors and study the feasibility of various optimization methods for implementing the MAP estimator. We proceed to investigate three areas where image data and parameter estimates are present in multiple channels and frames and are interrelated in complicated ways. These areas of study include color image segmentation, multislice MR image segmentation, and optical flow estimation and segmentation in multiframe temporal sequences. Besides developing novel algorithms in each of these areas, we demonstrate how to exploit the potential of MAP estimation and GRFs, and we propose practical and efficient implementations. Illustrative examples and relevant experimental results are included.
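    For readers unfamiliar with the framework, the MAP-with-Gibbs-prior formulation that such work builds on can be written in its textbook form as

```latex
\hat{x}_{\mathrm{MAP}} = \arg\max_{x}\; p(y \mid x)\, p(x),
\qquad
p(x) = \frac{1}{Z} \exp\bigl(-U(x)\bigr),
```

    so that maximizing the posterior is equivalent to minimizing the energy -log p(y|x) + U(x), where U is a sum of clique potentials defined by the Gibbs random field.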

  14. Hyperspectral imaging for non-contact analysis of forensic traces.

    PubMed

    Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G

    2012-11-30

    Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges, such as the analysis of traces on the backgrounds encountered in casework, are addressed, followed by a summary of possible future applications. PMID:23088824

  15. Modulation-transfer-function analysis for sampled image systems

    NASA Technical Reports Server (NTRS)

    Park, S. K.; Kaczynski, M.-A.; Schowengerdt, R.

    1984-01-01

    Sampling generally causes the response of a digital imaging system to be locally shift-variant and not directly amenable to Modulation Transfer Function (MTF) analysis. However, this paper demonstrates that a meaningful system response can be calculated by averaging over an ensemble of point-source system inputs to yield an MTF which accounts for the combined effects of image formation, sampling, and image reconstruction. As an illustration, the MTF of the Landsat MSS system is analyzed to reveal an average effective instantaneous field of view which is significantly larger than the commonly accepted value, particularly in the along-track direction where undersampling contributes markedly to an MTF reduction and resultant increase in image blur.

  16. Undersampled dynamic magnetic resonance imaging using kernel principal component analysis.

    PubMed

    Wang, Yanhua; Ying, Leslie

    2014-01-01

    Compressed sensing (CS) is a promising approach to accelerate dynamic magnetic resonance imaging (MRI). Most existing CS methods employ linear sparsifying transforms. Recent developments in non-linear or kernel-based sparse representations have been shown to outperform linear transforms. In this paper, we present an iterative non-linear CS dynamic MRI reconstruction framework that uses kernel principal component analysis (KPCA) to exploit the sparseness of the dynamic image sequence in the feature space. Specifically, we apply KPCA to represent the temporal profiles of each spatial location and reconstruct the images through a modified pre-image problem. The underlying optimization algorithm is based on variable splitting and a fixed-point iteration method. Simulation results show that the proposed method outperforms the conventional CS method in terms of aliasing artifact reduction and kinetic information preservation. PMID:25570262
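    To illustrate the representation step alone, the sketch below projects temporal profiles onto a few kernel principal components and maps them back through scikit-learn's approximate pre-image; the paper's reconstruction additionally couples this with the CS data-fidelity term via variable splitting, which is omitted here.

```python
from sklearn.decomposition import KernelPCA

def kpca_denoise_profiles(profiles, n_components=8):
    """profiles: (n_locations, n_frames) array, one temporal profile per
    spatial location. Returns approximate pre-images after truncating to
    n_components kernel principal components (parameters illustrative)."""
    kpca = KernelPCA(n_components=n_components, kernel='rbf',
                     fit_inverse_transform=True)
    coeffs = kpca.fit_transform(profiles)     # feature-space representation
    return kpca.inverse_transform(coeffs)     # approximate pre-image
```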

  17. Generation and Analysis of Wire Rope Digital Radiographic Images

    NASA Astrophysics Data System (ADS)

    Chakhlov, S.; Anpilogov, P.; Batranin, A.; Osipov, S.; Zhumabekova, Sh; Yadrenkin, I.

    2016-06-01

    The paper deals with different structures of a digital radiographic system intended for wire rope radiography. The scanning geometry of the wire rope is presented and the main stages of its digital radiographic image generation are identified. Correction algorithms are suggested for X-ray beam hardening. The complex internal structure of the wire rope is illustrated by an image of a 25 mm diameter rope obtained by X-ray computed tomography. The paper considers an approach to the analysis of digital radiographic images based on the closeness of certain parameters (invariants) across all unit cross-sections of the reference wire rope, or across sections with length equal to the lay. The main invariants of wire rope radiographic images are identified and compared with those of its typical defects.

  18. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  19. Lung nodules detection in chest radiography: image components analysis

    NASA Astrophysics Data System (ADS)

    Luo, Tao; Mou, Xuanqin; Yang, Ying; Yan, Hao

    2009-02-01

    We aimed to evaluate the effect of different components of the chest image on the performance of both a human observer and a channelized Fisher-Hotelling (CFH) model in a nodule detection task. Irrelevant and relevant components were separated from clinical chest radiographs by Principal Component Analysis (PCA). Human observer performance was evaluated in a two-alternative forced-choice (2AFC) experiment on original clinical images and on images containing only anatomical structure obtained by PCA. A channelized Fisher-Hotelling model with Laguerre-Gauss basis functions was evaluated to predict human performance. We show that the relevant component is the primary factor influencing nodule detection in chest radiography. There is an obvious difference in detectability between the human observer and the CFH model for nodule detection in images containing only anatomical structure. The CFH model should therefore be used with care.

  20. Aural analysis of image texture via cepstral filtering and sonification

    NASA Astrophysics Data System (ADS)

    Rangayyan, Rangaraj M.; Martins, Antonio C. G.; Ruschioni, Ruggero A.

    1996-03-01

    Texture plays an important role in image analysis and understanding, with many applications in medical imaging and computer vision. However, analysis of texture by image processing is a rather difficult issue, with most techniques being oriented towards statistical analysis which may not have readily comprehensible perceptual correlates. We propose new methods for auditory display (AD) and sonification of (quasi-) periodic texture (where a basic texture element or `texton' is repeated over the image field) and random texture (which could be modeled as filtered or `spot' noise). Although the AD designed is not intended to be speech-like or musical, we draw analogies between the two types of texture mentioned above and voiced/unvoiced speech, and design a sonification algorithm which incorporates physical and perceptual concepts of texture and speech. More specifically, we present a method for AD of texture where the projections of the image at various angles (Radon transforms or integrals) are mapped to audible signals and played in sequence. In the case of random texture, the spectral envelopes of the projections are related to the filter spot characteristics, and convey the essential information for texture discrimination. In the case of periodic texture, the AD provides timbre and pitch related to the texton and periodicity. In another procedure for sonification of periodic texture, we propose to first deconvolve the image using cepstral analysis to extract information about the texton and the horizontal and vertical periodicities. The projections of individual textons at various angles are used to create a voiced-speech-like signal with each projection mapped to a basic wavelet, the horizontal period to pitch, and the vertical period to rhythm on a longer time scale. The sound pattern then consists of a serial, melody-like sonification of the patterns for each projection. We believe that our approaches provide the much-desired `natural' connection between the image and its auditory display.

  1. Multispectral retinal image analysis: a novel non-invasive tool for retinal imaging

    PubMed Central

    Calcagni, A; Gibson, J M; Styles, I B; Claridge, E; Orihuela-Espina, F

    2011-01-01

    Purpose To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. Results The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, providing certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. Conclusion The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow for absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool. PMID:21904394

  2. Hyperspectral fluorescence imaging coupled with multivariate image analysis techniques for contaminant screening of leafy greens

    NASA Astrophysics Data System (ADS)

    Everard, Colm D.; Kim, Moon S.; Lee, Hoyoung

    2014-05-01

    The production of contaminant-free fresh fruit and vegetables is needed to reduce foodborne illnesses and related costs. Leafy greens grown in the field can be susceptible to fecal matter contamination from uncontrolled livestock and wild animals entering the field. Pathogenic bacteria can be transferred via fecal matter, and several outbreaks of E. coli O157:H7 have been associated with the consumption of leafy greens. This study examines the use of hyperspectral fluorescence imaging coupled with multivariate image analysis to detect fecal contamination on spinach leaves (Spinacia oleracea). Hyperspectral fluorescence images from 464 to 800 nm were captured; ultraviolet excitation was supplied by two LED-based line light sources at 370 nm. Key wavelengths and algorithms useful for a contaminant screening optical imaging device were identified and developed, respectively. A non-invasive screening device has the potential to reduce the harmful consequences of foodborne illnesses.

  3. Robust approach to ocular fundus image analysis

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Passerini, Giorgio; Puliti, Paolo; Zingaretti, Primo

    1993-07-01

    The analysis of morphological and structural modifications of retinal blood vessels plays an important role both in establishing the presence of systemic diseases such as hypertension and diabetes and in studying their course. The paper describes a robust set of techniques developed to quantitatively evaluate morphometric aspects of the ocular fundus vascular and microvascular network. The following are defined: (1) the concept of 'Local Direction of a vessel' (LD); (2) a special form of edge detection, named Signed Edge Detection (SED), which uses LD to choose the convolution kernel in the edge detection process and is able to distinguish between the left and right vessel edges; (3) an iterative tracking (IT) method. The developed techniques use both LD and SED intensively in: (a) the automatic detection of the number, position and size of blood vessels departing from the optic papilla; (b) the tracking of the body and edges of the vessels; (c) the recognition of vessel branches and crossings; (d) the extraction of a set of features such as blood vessel length and average diameter, artery and arteriole tortuosity, crossing position and the angle between two vessels. The algorithms, implemented in the C language, have an execution time that depends on the complexity of the vascular network being processed.

  4. Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging

    NASA Technical Reports Server (NTRS)

    Felt, Frederick S.

    2005-01-01

    During electrical testing of a full-field CCD image sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of these failures on the devices. After removing the fiber optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array. This structure complicated the failure analysis. Alternate methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A, SQUID, and MP. Of these, the SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.

  5. A computational framework for exploratory data analysis in biomedical imaging

    NASA Astrophysics Data System (ADS)

    Wismueller, Axel

    2009-02-01

    Purpose: To develop, test, and evaluate a novel unsupervised machine learning method for the analysis of multidimensional biomedical imaging data. Methods: The Exploration Machine (XOM) is introduced as a method for computing low-dimensional representations of high-dimensional observations. XOM systematically inverts functional and structural components of topology-preserving mappings. Thus, it can contribute to both structure-preserving visualization and data clustering. We applied XOM to the analysis of microarray imaging data of gene expression profiles in Saccharomyces cerevisiae, and to model-free analysis of functional brain MRI data by unsupervised clustering. For both applications, we performed quantitative comparisons to results obtained by established algorithms. Results: Genome data: Absolute (relative) Sammon error values were 2.21 × 10^3 (1.00) for XOM, 2.45 × 10^3 (1.11) for Sammon's mapping, 2.77 × 10^3 (1.25) for Locally Linear Embedding (LLE), 2.82 × 10^3 (1.28) for PCA, 3.36 × 10^3 (1.52) for Isomap, and 10.19 × 10^3 (4.61) for Self-Organizing Map (SOM). Functional MRI data: Areas under ROC curves for detection of task-related brain activation were 0.984 +/- 0.03 for XOM, 0.983 +/- 0.02 for Minimal-Free-Energy VQ, and 0.979 +/- 0.02 for SOM. Conclusion: We introduce the Exploration Machine as a novel machine learning method for the analysis of multidimensional biomedical imaging data. XOM can be successfully applied to microarray gene expression analysis and to clustering of functional brain MR image time-series. By simultaneously contributing to dimensionality reduction and data clustering, XOM is a useful novel method for data analysis in biomedical imaging.

  6. Secure thin client architecture for DICOM image analysis

    NASA Astrophysics Data System (ADS)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents the concept of a Secure Thin Client (STC) architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, Computed Tomography (CT) dose index, and other relevant information stored within the image header files as well as in hospital databases. The STC architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a software program based on a smart algorithm which extracts DICOM tags from CT images in this particular application, irrespective of the CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.
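    The third-tier tag extraction can be pictured with the pydicom library; the field list below is illustrative, not the WRHA application's actual configuration.

```python
import pydicom

def extract_ct_tags(path):
    """Read a few vendor-independent header fields from a CT DICOM file
    without loading pixel data."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    return {
        'PatientID': ds.get('PatientID'),
        'StudyDate': ds.get('StudyDate'),
        'Modality': ds.get('Modality'),
        'Manufacturer': ds.get('Manufacturer'),
        'CTDIvol': ds.get('CTDIvol'),  # CT dose index, tag (0018,9345), when present
    }
```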

  7. Imaging spectroscopic analysis at the Advanced Light Source

    SciTech Connect

    MacDowell, A. A.; Warwick, T.; Anders, S.; Lamble, G.M.; Martin, M.C.; McKinney, W.R.; Padmore, H.A.

    1999-05-12

    One of the major advances at the high-brightness third-generation synchrotrons is the dramatic improvement in imaging capability. There is a large multi-disciplinary effort underway at the ALS to develop imaging X-ray, UV and infrared spectroscopic analysis on a spatial scale from a few microns to 10 nm. These developments make use of light that varies in energy from 6 meV to 15 keV. Imaging and spectroscopy are finding applications in surface science, bulk materials analysis, semiconductor structures, particulate contaminants, magnetic thin films, biology and environmental science. This article is an overview and status report from the developers of some of these techniques at the ALS. The following table lists all the currently available microscopes at the ALS. This article will describe some of the microscopes and some of the early applications.

  8. Three-dimensional image analysis of the mouse cochlea.

    PubMed

    Wright, Graham D; Horn, Henning F

    2016-01-01

    The mouse has proven to be an essential model system for studying hearing loss. A key advantage of the mouse is the ability to image the sensory cells in the cochlea. Many different protocols exist for the dissection and imaging of the cochlea. Here we describe a method that utilizes confocal imaging of whole-mount preparations followed by 3D analysis using the Imaris software. The 3D analysis of confocal stacks has been successfully used for investigating a number of mouse tissues and developmental processes. We propose that this method is also a valuable tool to analyze the cellular and tissue organization of the sensory hair cells in the cochlea. PMID:26786803

  9. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is carried out using an Artificial Intelligence system based on neural nets, fuzzy logic and classifier combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under examination from the same image used to detect refractive errors.

  10. Automated rice leaf disease detection using color image analysis

    NASA Astrophysics Data System (ADS)

    Pugoy, Reinald Adrian D. L.; Mariano, Vladimir Y.

    2011-06-01

    In rice-related institutions such as the International Rice Research Institute, assessing the health condition of a rice plant through its leaves, which is usually done as a manual eyeball exercise, is important to come up with good nutrient and disease management strategies. In this paper, an automated system that can detect diseases present in a rice leaf using color image analysis is presented. In the system, the outlier region is first obtained from a rice leaf image to be tested using histogram intersection between the test and healthy rice leaf images. Upon obtaining the outlier, it is then subjected to a threshold-based K-means clustering algorithm to group related regions into clusters. Then, these clusters are subjected to further analysis to finally determine the suspected diseases of the rice leaf.
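    A minimal sketch of the histogram-intersection step is shown below; the bin count and the per-channel averaging are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def histogram_intersection(test_img, healthy_img, bins=32):
    """Mean per-channel histogram intersection between a test leaf image
    and a healthy reference (both uint8 RGB). Values near 1 indicate
    similar color distributions; low values flag outlier regions."""
    score = 0.0
    for c in range(3):
        h1, _ = np.histogram(test_img[..., c], bins=bins, range=(0, 256))
        h2, _ = np.histogram(healthy_img[..., c], bins=bins, range=(0, 256))
        score += np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum()
    return score / 3.0
```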

  11. Stromatoporoid biometrics using image analysis software: A first order approach

    NASA Astrophysics Data System (ADS)

    Wolniewicz, Pawel

    2010-04-01

    Strommetric is a new image analysis computer program that performs morphometric measurements of stromatoporoid sponges. The program measures 15 features of skeletal elements (pillars and laminae) visible in both longitudinal and transverse thin sections. The software is implemented in C++, using the Open Computer Vision (OpenCV) library. The image analysis system distinguishes skeletal elements from sparry calcite using Otsu's method for image thresholding. More than 150 photos of thin sections were used as a test set, from which 36,159 measurements were obtained. The software provided about one hundred times more data than the methods applied previously. The data obtained are reproducible, even if the work is repeated by different workers. Thus the method makes biometric studies of stromatoporoids objective.
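    The Otsu thresholding step is reproducible with OpenCV (Strommetric itself is C++/OpenCV; the Python binding below is an illustration with a hypothetical input file):

```python
import cv2

# Hypothetical 8-bit grayscale photograph of a thin section
thin_section = cv2.imread('thin_section.png', cv2.IMREAD_GRAYSCALE)

# Otsu's method picks the global threshold minimizing intra-class variance,
# separating skeletal elements from sparry calcite
thresh, binary = cv2.threshold(thin_section, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```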

  12. Statistical Analysis of speckle noise reduction techniques for echocardiographic Images

    NASA Astrophysics Data System (ADS)

    Saini, Kalpana; Dewal, M. L.; Rohit, Manojkumar

    2011-12-01

    Echocardiography is a safe, easy and fast technology for diagnosing cardiac diseases. As other ultrasound images do, these images also contain speckle noise. In some cases this speckle noise is useful, such as in motion detection, but in general noise removal is required for better analysis of the image and proper diagnosis. Different adaptive and anisotropic filters are included in the statistical analysis. Statistical parameters such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) are calculated for performance measurement. Another important aspect is that blurring may occur during speckle noise removal, so it is preferred that the filter also enhance edges during noise removal.
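    The three quality metrics are standard; a minimal numpy sketch, assuming 8-bit images and a noise-free reference, is:

```python
import numpy as np

def rmse(reference, filtered):
    diff = reference.astype(float) - filtered.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(reference, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    return 20.0 * np.log10(peak / rmse(reference, filtered))

def snr(reference, filtered):
    """Signal power over residual-noise power, in dB."""
    noise = reference.astype(float) - filtered.astype(float)
    return 10.0 * np.log10(np.sum(reference.astype(float) ** 2)
                           / np.sum(noise ** 2))
```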

  13. Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Zawacki, B. E.

    1978-01-01

    Special photographic or television image analysis is shown to be a potentially useful technique to assist the physician in the early diagnosis of thermal burn injury. A background on the medical and physiological problems of burns is presented. The proposed methodology for burns diagnosis from both the theoretical and clinical points of view is discussed. The television/computer system constructed to accomplish this analysis is described, and the clinical results are discussed.

  14. Fractal image analysis - Application to the topography of Oregon and synthetic images.

    NASA Technical Reports Server (NTRS)

    Huang, Jie; Turcotte, Donald L.

    1990-01-01

    Digitized topography for the state of Oregon has been used to obtain maps of fractal dimension and roughness amplitude. The roughness amplitude correlates well with variations in relief and is a promising parameter for the quantitative classification of landforms. The spatial variations in fractal dimension are low and show no clear correlation with different tectonic settings. For Oregon the mean fractal dimension from a two-dimensional spectral analysis is D = 2.586, and for a one-dimensional spectral analysis the mean fractal dimension is D = 1.487, which is close to the Brown noise value D = 1.5. Synthetic two-dimensional images have also been generated for a range of D values. For D = 2.6, the synthetic image has a mean one-dimensional spectral fractal dimension D = 1.58, which is consistent with the results for Oregon. This approach can be easily applied to any digitized image that obeys fractal statistics.

  15. Automating the assessment of citrus canker symptoms with image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Citrus canker (CC, caused by Xanthomonas citri) is a serious disease of citrus in Florida and other citrus-growing regions. Severity of symptoms can be estimated by visual rating, but there is inter- and intra-rater variation. Automated image analysis (IA) may offer a way of reducing some of ...

  16. The Medical Analysis of Child Sexual Abuse Images

    ERIC Educational Resources Information Center

    Cooper, Sharon W.

    2011-01-01

    Analysis of child sexual abuse images, commonly referred to as pornography, requires a familiarity with the sexual maturation rating of children and an understanding of growth and development parameters. This article explains barriers that exist in working in this area of child abuse, the differences between subjective and objective analyses,…

  17. Scanning probe image wizard: A toolbox for automated scanning probe microscopy data analysis

    NASA Astrophysics Data System (ADS)

    Stirling, Julian; Woolley, Richard A. J.; Moriarty, Philip

    2013-11-01

    We describe SPIW (scanning probe image wizard), a new image processing toolbox for SPM (scanning probe microscope) images. SPIW can be used to automate many aspects of SPM data analysis, even for images with surface contamination and step edges present. Specialised routines are available for images with atomic or molecular resolution to improve image visualisation and generate statistical data on surface structure.

  18. Quantifying biodiversity using digital cameras and automated image analysis.

    NASA Astrophysics Data System (ADS)

    Roadknight, C. M.; Rose, R. J.; Barber, M. L.; Price, M. C.; Marshall, I. W.

    2009-04-01

    Monitoring the effects on biodiversity of extensive grazing in complex semi-natural habitats is labour intensive. There are also concerns about the standardization of semi-quantitative data collection. We have chosen to focus initially on automating the most time consuming aspect - the image analysis. The advent of cheaper and more sophisticated digital camera technology has led to a sudden increase in the number of habitat monitoring images and information being collected. We report on the use of automated trail cameras (designed for the game hunting market) to continuously capture images of grazer activity in a variety of habitats at Moor House National Nature Reserve, which is situated in the North of England at an average altitude of over 600 m. Rainfall is high, and in most areas the soil consists of deep peat (1 m to 3 m), populated by a mix of heather, mosses and sedges. The cameras have been in continuous operation over a 6-month period; daylight images are in full colour and night images (IR flash) are black and white. We have developed artificial intelligence based methods to assist in the analysis of the large number of images collected, generating alert states for new or unusual image conditions. This paper describes the data collection techniques, outlines the quantitative and qualitative data collected, and proposes online and offline systems that can reduce the manpower overheads and increase focus on important subsets of the collected data. By converting digital image data into statistical composite data it can be handled in a similar way to other biodiversity statistics, thus improving the scalability of monitoring experiments. Unsupervised feature detection methods and supervised neural methods were tested and offered solutions for simplifying the process. Accurate (85 to 95%) categorization of faunal content can be obtained, requiring human intervention only for those images containing rare animals or unusual (undecidable) conditions.

  19. Modulation of retinal image vasculature analysis to extend utility and provide secondary value from optical coherence tomography imaging.

    PubMed

    Cameron, James R; Ballerini, Lucia; Langan, Clare; Warren, Claire; Denholm, Nicholas; Smart, Katie; MacGillivray, Thomas J

    2016-04-01

    Retinal image analysis is emerging as a key source of biomarkers of chronic systemic conditions affecting the cardiovascular system and brain. The rapid development and increasing diversity of commercial retinal imaging systems present a challenge to image analysis software providers. In addition, clinicians are looking to extract maximum value from the clinical imaging taking place. We describe how existing and well-established retinal vasculature segmentation and measurement software for fundus camera images has been modulated to analyze scanning laser ophthalmoscope retinal images generated by the dual-modality Heidelberg SPECTRALIS(®) instrument, which also features optical coherence tomography. PMID:27175375

  20. Proceedings of the Second Annual Symposium on Mathematical Pattern Recognition and Image Analysis Program

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1984-01-01

    Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.

  1. Ganalyzer: A Tool for Automatic Galaxy Image Analysis

    NASA Astrophysics Data System (ADS)

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.
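    The radial intensity plot at the heart of this pipeline can be pictured as sampling the image on a polar grid around the galaxy center; in Ganalyzer the peaks of these angular profiles are traced across radii and their slopes measure spirality. The nearest-neighbour sketch below is a crude re-implementation for illustration, not Ganalyzer's own code.

```python
import numpy as np

def polar_unwrap(img, cy, cx, n_r=60, n_theta=360):
    """Sample intensity on a polar grid centered at (cy, cx); returns an
    (n_r, n_theta) array of intensity versus angle for each radius."""
    out = np.zeros((n_r, n_theta))
    radii = np.linspace(1, min(img.shape) // 2 - 1, n_r)
    angles = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for i, r in enumerate(radii):
        for j, t in enumerate(angles):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[i, j] = img[y, x]
    return out
```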

  2. Quantitative Computed Tomography and Image Analysis for Advanced Muscle Assessment

    PubMed Central

    Edmunds, Kyle Joseph; Gíslason, Magnus K.; Arnadottir, Iris D.; Marcante, Andrea; Piccione, Francesco; Gargiulo, Paolo

    2016-01-01

    Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the utilization of a wide variety of techniques to non-invasively characterize and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies capture changes in tissue composition within muscles, as visualized by the association of tissue types with specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented both as average HU values and as compositions with respect to total muscle volume, demonstrating the reliability of these tools to monitor, assess and characterize muscle degeneration. PMID:27478562

  3. Positron emission tomography: physics, instrumentation, and image analysis.

    PubMed

    Porenta, G

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex procedure restricted to few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. PMID:7941595

  4. Discriminating enumeration of subseafloor life using automated fluorescent image analysis

    NASA Astrophysics Data System (ADS)

    Morono, Y.; Terada, T.; Masui, N.; Inagaki, F.

    2008-12-01

    Enumeration of microbial cells in marine subsurface sediments has provided fundamental information for understanding the extent of life and the deep biosphere on Earth. The microbial population has been evaluated manually by direct cell counting under the microscope, because automated recognition of cell-derived fluorescent signals has been extremely difficult. Here, we improved the conventional method by removing non-specific fluorescent backgrounds and enumerated the cell population in sediments using a newly developed automated microscopic imaging system. Although SYBR Green I is known to bind specifically to double-stranded DNA (Lunau et al., 2005), we still observed some SYBR-stainable particulate matters (SYBR-SPAMs) in heat-sterilized control sediments (450°C, 6 h), assumed to be silicates or mineralized organic matter. Newly developed acid-wash treatments with hydrofluoric acid (HF) followed by image analysis successfully removed these background objects and yielded artifact-free microscopic images. To obtain statistically meaningful fluorescent images, we constructed a computer-assisted automated cell counting system. Given the comparative data set of cell abundance in acid-washed marine sediments evaluated by SYBR Green I and acridine orange (AO) staining with and without the image analysis, our protocol can provide statistically meaningful absolute counts of discriminated cell-derived fluorescent signals.

  5. GANALYZER: A TOOL FOR AUTOMATIC GALAXY IMAGE ANALYSIS

    SciTech Connect

    Shamir, Lior

    2011-08-01

    We describe Ganalyzer, a model-based tool that can automatically analyze and classify galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large data sets of galaxy images collected by autonomous sky surveys such as SDSS, LSST, or DES. The software is available for free download at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer, and the data used in the experiment are available at http://vfacstaff.ltu.edu/lshamir/downloads/ganalyzer/GalaxyImages.zip.

  6. Multispectral imaging fluorescence microscopy for lymphoid tissue analysis

    NASA Astrophysics Data System (ADS)

    Monici, Monica; Agati, Giovanni; Fusi, Franco; Mazzinghi, Piero; Romano, Salvatore; Pratesi, Riccardo; Alterini, Renato; Bernabei, Pietro A.; Rigacci, Luigi

    1999-01-01

    Multispectral imaging autofluorescence microscopy (MIAM) is used here for the analysis of lymphatic tissues. Lymph node biopsies from patients with lymphadenopathy of different origins have been examined. Natural fluorescence (NF) images of 3-micrometer sections were obtained using three filters peaked at 450, 550 and 680 nm with 50 nm bandpass. Monochrome images were combined together into a single RGB image. NF images of lymph node tissue sections show intense blue-green fluorescence of the connective stroma. Normal tissue shows follicles with faintly fluorescent lymphocytes, as expected from the morphologic and functional characteristics of these cells. Other, more fluorescent cells (e.g., plasma cells and macrophages) are evident. Intense green fluorescence is localized in the inner wall of the vessels. Tissues from patients affected by Hodgkin's lymphoma show diffuse fluorescence due to connective infiltration and no evidence of follicle organization. Brightly fluorescent large cells, presumably Hodgkin cells, are also observed. These results indicate that MIAM can discriminate between normal and pathological tissues on the basis of their natural fluorescence pattern and therefore represents a potentially useful technique for diagnostic applications. Analysis of the fluorescence spectra of both normal and malignant lymphoid tissues proved much less discriminatory than MIAM.

  7. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
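    The core solve for a spatially invariant delta-basis kernel reduces to linear least squares: each kernel pixel's coefficient scales a shifted copy of the reference image. The unregularized sketch below illustrates that step only; the paper's methods additionally generate candidate kernel designs and select among them with an information criterion.

```python
import numpy as np

def solve_delta_basis_kernel(ref, target, half=3):
    """Fit a (2*half+1)^2 convolution kernel so that ref * K ~ target.
    ref, target: 2D arrays of equal shape. Borders are trimmed so the
    wrap-around of np.roll never enters the fit."""
    h, w = ref.shape
    shifts = [(dy, dx) for dy in range(-half, half + 1)
                       for dx in range(-half, half + 1)]
    core = (slice(half, h - half), slice(half, w - half))
    A = np.column_stack(
        [np.roll(np.roll(ref, dy, axis=0), dx, axis=1)[core].ravel()
         for dy, dx in shifts])
    b = target[core].ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(2 * half + 1, 2 * half + 1)
```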

  8. Analysis of pixel circuits in CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Mei, Zou; Chen, Nan; Yao, Li-bin

    2015-04-01

    CMOS image sensors (CIS) have lower power consumption, lower cost and smaller size than CCD image sensors. However, CCDs generally have higher performance than CIS, mainly due to lower noise. The pixel circuit used in CIS is the first part of the signal processing chain and is connected directly to the photodiode, so its performance greatly affects the CIS and even the whole imaging system. To achieve high performance, CMOS image sensors need advanced pixel circuits. Many pixel circuits are used in CIS, such as the passive pixel sensor (PPS), the 3T and 4T active pixel sensors (APS), and the capacitive transimpedance amplifier (CTIA). First, the main performance parameters of each pixel structure, including noise, injection efficiency, sensitivity, power consumption, and stability of the bias voltage, are analyzed. Through the theoretical analysis of these pixel circuits, it is concluded that the CTIA pixel circuit has good noise performance, high injection efficiency, stable photodiode bias, and high sensitivity with a small integrator capacitor. Furthermore, the APS and CTIA pixel circuits are simulated in a standard 0.18-μm CMOS process with an n-well/p-sub photodiode using SPICE, and the simulation results confirm the theoretical analysis. This shows the possibility that CMOS image sensors can be extended to a wide range of applications requiring high performance.

  9. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers cause the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - not to scale, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser, adjust the intensity min/max cutoff or its scaling function and zoom level, apply color-maps, view position and FITS header information, execute typically used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects.

  10. Texture image analysis: application to the classification of bovine muscles from meat slice images

    NASA Astrophysics Data System (ADS)

    Basset, Olivier; Dupont, Florent; Hernandez, Ange; Odet, Christophe; Abouelkaram, Said; Culioli, Joseph

    1999-11-01

    Image texture is analyzed to provide a series of features for the classification of several sets of images. Images of meat slices are processed to classify various samples of bovine muscle as a function of three factors: animal age, muscle and castration. The different images present a particular texture that is a global representation of the connective tissue. The aim of texture analysis is to extract specific features for each kind of meat. The meat slices available for this study came from 19 animals, including 10 castrated animals. Their ages were 4 months (10 animals), 12 months (5 animals) and 16 months (4 animals). The same three muscles were studied for each animal. The texture analysis was carried out on digitized images using the first- and second-order statistics of the gray levels and morphological parameters for the characterization of the marbling. Two classification methods were implemented: a k-nearest neighbors method and a method based on neural networks. Both methods give comparable results and lead to satisfactory classification of the samples in relation to the three variation factors. The correlation of the textural features with chemical and mechanical parameters measured on the meat samples is also examined. Regression experiments show that textural features have potential to indicate meat characteristics.
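    The second-order gray-level statistics referred to above are typically co-occurrence (GLCM) features; a sketch using scikit-image follows (the feature choice is illustrative; scikit-image versions before 0.19 spell these functions greycomatrix/greycoprops):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """gray_img: 2D uint8 image. Returns a few co-occurrence statistics
    averaged over two offsets, of the kind used to characterize marbling."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}
```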

  11. Computerized image analysis for acetic acid induced intraepithelial lesions

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.

    2008-03-01

    Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid-induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope, one before and one after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post-acetic-acid and pre-acetic-acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.

  12. Hyperspectral imaging and quantitative analysis for prostate cancer detection

    PubMed Central

    Akbari, Hamed; Halig, Luma V.; Schuster, David M.; Osunkoya, Adeboye; Master, Viraj; Nieh, Peter T.; Chen, Georgia Z.

    2012-01-01

    Hyperspectral imaging (HSI) is an emerging modality for various medical applications. Its spectroscopic data might make it possible to noninvasively detect cancer. Quantitative analysis is often necessary in order to differentiate healthy from diseased tissue. We propose the use of an advanced image processing and classification method in order to analyze hyperspectral image data for prostate cancer detection. The spectral signatures were extracted and evaluated in both cancerous and normal tissue. Least squares support vector machines were developed and evaluated for classifying hyperspectral data in order to enhance the detection of cancer tissue. This method was used to detect prostate cancer in tumor-bearing mice and on pathology slides. Spatially resolved images were created to highlight the differences of the reflectance properties of cancer versus those of normal tissue. Preliminary results with 11 mice showed that the sensitivity and specificity of the hyperspectral image classification method are 92.8% ± 2.0% and 96.9% ± 1.3%, respectively. Therefore, this imaging method may be able to help physicians to dissect malignant regions with a safe margin and to evaluate the tumor bed after resection. This pilot study may lead to advances in the optical diagnosis of prostate cancer using HSI technology. PMID:22894488
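
    The classifier family named here, least squares support vector machines, replaces the SVM's inequality constraints with equalities so that training reduces to solving one linear system. A minimal sketch follows, treating the ±1 class labels as regression targets with an RBF kernel; this is a generic textbook formulation, not the authors' implementation, and the gamma/sigma values are placeholders.

      import numpy as np

      def rbf_kernel(A, B, sigma=1.0):
          # Gaussian kernel matrix between the row vectors of A and B.
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def lssvm_train(X, y, gamma=1.0, sigma=1.0):
          # y in {-1, +1}; training solves one (n+1) x (n+1) linear system.
          n = len(y)
          K = rbf_kernel(X, X, sigma)
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma   # ridge-like regularization
          rhs = np.concatenate([[0.0], y.astype(float)])
          sol = np.linalg.solve(A, rhs)
          return sol[0], sol[1:]              # bias b, dual weights alpha

      def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
          return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)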

  13. Performance analysis of image fusion methods in transform domain

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

    Image fusion involves merging two or more images in such a way as to retain the most desirable characteristics of each. There are various image fusion methods, and they can be classified into three main categories: i) spatial domain, ii) transform domain, and iii) statistical domain. We focus on the transform domain in this paper, as spatial domain methods are primitive and statistical domain methods suffer from a significant increase in computational complexity. In the field of image fusion, performance analysis is important since the evaluation result gives valuable information which can be utilized in various applications, such as military surveillance, medical imaging, and remote sensing. In this paper, we analyze and compare the performance of fusion methods based on four different transforms: i) wavelet transform, ii) curvelet transform, iii) contourlet transform and iv) nonsubsampled contourlet transform. The fusion framework and scheme are explained in detail, and two different sets of images are used in our experiments. Furthermore, various performance evaluation metrics are adopted to quantitatively analyze the fusion results. The comparison results show that the nonsubsampled contourlet transform method performs better than the other three methods. During the experiments, we also found that a decomposition level of 3 gave the best fusion performance, and decomposition levels beyond level 3 did not significantly affect the fusion results.
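
    To make the transform-domain idea concrete, the sketch below fuses two registered grayscale images in the wavelet domain using PyWavelets: average the coarse approximation, keep the larger-magnitude detail coefficients. The fusion rules, wavelet, and level-3 decomposition (the level the paper found best) are typical choices rather than the exact schemes benchmarked here.

      import numpy as np
      import pywt

      def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
          ca = pywt.wavedec2(img_a, wavelet, level=level)
          cb = pywt.wavedec2(img_b, wavelet, level=level)
          fused = [(ca[0] + cb[0]) / 2.0]        # approximation: average
          for da, db in zip(ca[1:], cb[1:]):     # details: max-abs selection
              fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                 for a, b in zip(da, db)))
          return pywt.waverec2(fused, wavelet)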

  14. Hyperspectral imaging and quantitative analysis for prostate cancer detection

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Halig, Luma V.; Schuster, David M.; Osunkoya, Adeboye; Master, Viraj; Nieh, Peter T.; Chen, Georgia Z.; Fei, Baowei

    2012-07-01

    Hyperspectral imaging (HSI) is an emerging modality for various medical applications. Its spectroscopic data might make it possible to noninvasively detect cancer. Quantitative analysis is often necessary in order to differentiate healthy from diseased tissue. We propose the use of an advanced image processing and classification method in order to analyze hyperspectral image data for prostate cancer detection. The spectral signatures were extracted and evaluated in both cancerous and normal tissue. Least squares support vector machines were developed and evaluated for classifying hyperspectral data in order to enhance the detection of cancer tissue. This method was used to detect prostate cancer in tumor-bearing mice and on pathology slides. Spatially resolved images were created to highlight the differences of the reflectance properties of cancer versus those of normal tissue. Preliminary results with 11 mice showed that the sensitivity and specificity of the hyperspectral image classification method are 92.8% ± 2.0% and 96.9% ± 1.3%, respectively. Therefore, this imaging method may be able to help physicians to dissect malignant regions with a safe margin and to evaluate the tumor bed after resection. This pilot study may lead to advances in the optical diagnosis of prostate cancer using HSI technology.

  15. Mars Exploration Rover's image analysis: Evidence of Microbialites on Mars.

    NASA Astrophysics Data System (ADS)

    Bianciardi, Giorgio; Rizzo, Vincenzo; Cantasano, Nicola

    2015-04-01

    The Mars Exploration Rovers, Opportunity and Spirit, investigated Martian plains where sedimentary rocks are present. The Mars Exploration Rovers' Athena morphological investigation showed microstructures organized in intertwined filaments of microspherules: a texture we have also found in samples of terrestrial (biogenic) stromatolites and other microbialites. We performed a quantitative image analysis to compare images of microbialites with the images photographed by the Rovers (corresponding to approximately 25,000 microstructures each for Earth and Mars). Contours were extracted and morphometric indexes were obtained: geometric and algorithmic complexities, entropy, tortuosity, minimum and maximum diameters. Terrestrial and Martian textures present a multifractal aspect. Mean values and confidence intervals from the Martian images overlapped perfectly with those from the terrestrial samples. The probability of this occurring by chance is 1/2^8, i.e., less than 0.004. Terrestrial abiogenic pseudostromatolites, by contrast, showed a simple fractal structure, a less ordered texture, and morphometric values different from those of the terrestrial biogenic stromatolite images or the Martian images (p<0.001). Our work provides presumptive evidence of microbialites in the Martian outcrops: the presence of unicellular life widespread on ancient Mars.

  16. Hyperspectral imaging and quantitative analysis for prostate cancer detection.

    PubMed

    Akbari, Hamed; Halig, Luma V; Schuster, David M; Osunkoya, Adeboye; Master, Viraj; Nieh, Peter T; Chen, Georgia Z; Fei, Baowei

    2012-07-01

    Hyperspectral imaging (HSI) is an emerging modality for various medical applications. Its spectroscopic data might make it possible to noninvasively detect cancer. Quantitative analysis is often necessary in order to differentiate healthy from diseased tissue. We propose the use of an advanced image processing and classification method in order to analyze hyperspectral image data for prostate cancer detection. The spectral signatures were extracted and evaluated in both cancerous and normal tissue. Least squares support vector machines were developed and evaluated for classifying hyperspectral data in order to enhance the detection of cancer tissue. This method was used to detect prostate cancer in tumor-bearing mice and on pathology slides. Spatially resolved images were created to highlight the differences of the reflectance properties of cancer versus those of normal tissue. Preliminary results with 11 mice showed that the sensitivity and specificity of the hyperspectral image classification method are 92.8% ± 2.0% and 96.9% ± 1.3%, respectively. Therefore, this imaging method may be able to help physicians to dissect malignant regions with a safe margin and to evaluate the tumor bed after resection. This pilot study may lead to advances in the optical diagnosis of prostate cancer using HSI technology. PMID:22894488

  17. Sensitivity analysis of near-infrared functional lymphatic imaging

    NASA Astrophysics Data System (ADS)

    Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon

    2012-06-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, but the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach for quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor, glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures for measuring lymphatic function in vivo in real time.

  18. Quantitative image analysis in sonograms of the thyroid gland

    NASA Astrophysics Data System (ADS)

    Catherine, Skouroliakou; Maria, Lyra; Aristides, Antoniou; Lambros, Vlahos

    2006-12-01

    High-resolution, real-time ultrasound is a routine examination for assessing disorders of the thyroid gland. However, current diagnostic practice is based mainly on qualitative evaluation of the resulting sonograms and therefore depends on the physician's experience. Computerized texture analysis is widely employed in sonographic images of various organs (liver, breast), and it has been proven to increase the sensitivity of diagnosis by providing a better tissue characterization. The present study attempts to characterize thyroid tissue by automatic texture analysis. The texture features that are calculated are based on the co-occurrence matrices proposed by Haralick. The sample consists of 40 patients. For each patient two sonographic images (one for each lobe) are recorded in DICOM format. The lobe is manually delineated in each sonogram, and the co-occurrence matrices for 52 separation vectors are calculated. The texture features extracted from each of these matrices are contrast, correlation, energy and homogeneity. Principal component analysis is used to select the optimal set of features. The statistical analysis resulted in the extraction of 21 optimal descriptors. The optimal descriptors are all co-occurrence parameters, as the first-order statistics did not prove to be representative of the images' characteristics. Most of the retained components depend mainly on the correlation feature at very short or very long separation distances. The results indicate that quantitative analysis of thyroid sonograms can provide an objective characterization of thyroid tissue.
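
    The feature-selection step - many co-occurrence descriptors reduced by principal component analysis - might look like the following scikit-learn sketch. The 4 x 52 = 208-feature layout matches the abstract, while the standardization and the variance threshold are illustrative assumptions, not the study's criterion.

      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # X: one row per sonogram; columns hold contrast, correlation, energy,
      # and homogeneity for each of the 52 separation vectors (208 features).
      def reduce_features(X, variance=0.95):
          Xs = StandardScaler().fit_transform(X)   # zero mean, unit variance
          pca = PCA(n_components=variance)         # keep 95% of the variance
          return pca.fit_transform(Xs)             # reduced descriptors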

  19. Determination of gunshot residues with image analysis: an experimental study.

    PubMed

    Tuğcu, Harun; Yorulmaz, Coşkun; Bayraktaroğlu, Görgün; Uner, Hüseyin Bülent; Karslioğlu, Yildirim; Koç, Sermet; Ulukan, Mustafa Ozer; Celasun, Bülent

    2005-09-01

    In firearm injuries, assessment of the firing range and determination of entrance and exit wounds are important. For this reason, evaluation of the amount and distribution of gunshot residues (GSRs) is necessary. Several methods and techniques for GSR analysis have been developed. Although these methods are relatively sensitive and specific, they may require expensive dedicated equipment. Therefore, a simple, easily applicable, more convenient method is needed. A total of 40 experimental shots were fired at calf skin from distances of 0, 2.5, 5, 10, 20, 30, 45, and 60 cm. Eighty samples were taken from the right and left sides of the wounds, and Alizarin Red S dye staining was performed. The amounts of GSR particles were measured with image analysis. GSRs were detected in all shots. The mean size of the distribution area of barium and lead around the wound had a significant negative correlation with shooting distance (r = -0.97, p < 0.001). As the distance increased, the amount of GSR decreased, and this decrease was nonlinear. Analysis of variance indicated significant differences between data groups depending on range (p < 0.001). The image analysis method may solve some of the standardization problems in the evaluation of GSRs. GSR detection with image analysis does not require experienced personnel and may be a suitable method for scientific studies and for routine purposes. PMID:16261988

  20. An Integrative Object-Based Image Analysis Workflow for UAV Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained with an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
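
    The over-segmentation that initializes the BPT can be sketched with scikit-image's SLIC implementation, as below; the input file name and the segment/compactness parameters are placeholders rather than values from the paper.

      from skimage.io import imread
      from skimage.segmentation import slic

      image = imread("mosaic.png")   # hypothetical mosaicked panorama
      # Label each pixel with a superpixel id; these regions form the
      # leaves of the Binary Partition Tree.
      superpixels = slic(image, n_segments=5000, compactness=10.0)
      # Region adjacency plus spectral statistics of the superpixels then
      # drive the bottom-up BPT merging.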

  1. Componential distribution analysis of food using near infrared ray image

    NASA Astrophysics Data System (ADS)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to its "deliciousness" are usually evaluated by componential analysis. The component content and the types of components in the food are determined by this analysis. However, componential analysis cannot resolve spatial detail, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using near-infrared (IR) images. The advantage of our method is its ability to visualize components that are invisible to the naked eye. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images at two wavelengths of near-IR light. In this paper, we describe a method to measure food components using near-IR image processing, and we show an application visualizing the saccharose distribution in pumpkin.
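
    The two-wavelength subtraction at the heart of the method reduces to a few lines: a difference image between an absorption band and a nearby reference band highlights where the component (saccharose, in the paper's example) is concentrated. The normalization below is an illustrative choice.

      import numpy as np

      def component_map(band_absorb, band_ref):
          # band_absorb, band_ref: co-registered near-IR images taken at the
          # component's absorption wavelength and at a reference wavelength.
          diff = band_ref.astype(float) - band_absorb.astype(float)
          diff -= diff.min()
          return diff / max(diff.max(), 1e-12)   # 0..1 map of component content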

  2. Wavelet Analysis of Satellite Images for Coastal Watch

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Peng, Chich Y.; Chang, Steve Y.-S.

    1997-01-01

    The two-dimensional wavelet transform is a very efficient bandpass filter, which can be used to separate various scales of processes and show their relative phase/location. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite imagery employing wavelet analysis are developed. The wavelet transform has been applied to satellite images, such as those from synthetic aperture radar (SAR), advanced very-high-resolution radiometer (AVHRR), and coastal zone color scanner (CZCS), for feature extraction. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ship wakes can be tracked by wavelet analysis using satellite data from repeating paths. Several examples of the wavelet analysis applied to various satellite images demonstrate the feasibility of this technique for coastal monitoring.

  3. Automatic analysis of microscopic images of red blood cell aggregates

    NASA Astrophysics Data System (ADS)

    Menichini, Pablo A.; Larese, Mónica G.; Riquelme, Bibiana D.

    2015-06-01

    Red blood cell aggregation is one of the most important factors in blood viscosity at stasis or at very low rates of flow. The basic structure of aggregates is a linear array of cells commonly termed rouleaux. Enhanced or abnormal aggregation is seen in clinical conditions such as diabetes and hypertension, producing alterations in the microcirculation, some of which can be analyzed through the characterization of aggregated cells. Frequently, image processing and analysis for the characterization of RBC aggregation have been done manually or semi-automatically using interactive tools. We propose a system that processes images of RBC aggregation and automatically obtains the characterization and quantification of the different types of RBC aggregates. The present technique could be adapted for routine use in hemorheological and clinical biochemistry laboratories because this automatic method is rapid, efficient and economical, and at the same time independent of the user performing the analysis (ensuring repeatability of the analysis).

  4. Automated Dermoscopy Image Analysis of Pigmented Skin Lesions

    PubMed Central

    Baldi, Alfonso; Quartulli, Marco; Murace, Raffaele; Dragonetti, Emanuele; Manganaro, Mario; Guerra, Oscar; Bizzi, Stefano

    2010-01-01

    Dermoscopy (dermatoscopy, epiluminescence microscopy) is a non-invasive diagnostic technique for the in vivo observation of pigmented skin lesions (PSLs), allowing a better visualization of surface and subsurface structures (from the epidermis to the papillary dermis). This diagnostic tool permits the recognition of morphologic structures not visible by the naked eye, thus opening a new dimension in the analysis of the clinical morphologic features of PSLs. In order to reduce the learning-curve of non-expert clinicians and to mitigate problems inherent in the reliability and reproducibility of the diagnostic criteria used in pattern analysis, several indicative methods based on diagnostic algorithms have been introduced in the last few years. Recently, numerous systems designed to provide computer-aided analysis of digital images obtained by dermoscopy have been reported in the literature. The goal of this article is to review these systems, focusing on the most recent approaches based on content-based image retrieval systems (CBIR). PMID:24281070

  5. A unified noise analysis for iterative image estimation

    SciTech Connect

    Qi, Jinyi

    2003-07-03

    Iterative image estimation methods have been widely used in emission tomography. An accurate estimate of the uncertainty of the reconstructed images is essential for quantitative applications. While a theoretical approach has been developed to analyze the noise propagation from iteration to iteration, the current results are limited to only a few iterative algorithms that have an explicit multiplicative update equation. This paper presents a theoretical noise analysis that is applicable to a wide range of preconditioned gradient-type algorithms. One advantage is that the proposed method does not require an explicit expression of the preconditioner, and hence it is applicable to some algorithms that involve line searches. By deriving the fixed-point expression from the iteration-based results, we show that the iteration-based noise analysis is consistent with the fixed-point-based analysis. Examples in emission tomography and transmission tomography are shown.

  6. Semiautomated analysis of embryoscope images: Using localized variance of image intensity to detect embryo developmental stages.

    PubMed

    Mölder, Anna; Drury, Sarah; Costen, Nicholas; Hartshorne, Geraldine M; Czanner, Silvester

    2015-02-01

    Embryo selection in in vitro fertilization (IVF) treatment has traditionally been done manually using microscopy at intermittent time points during embryo development. Novel techniques have made it possible to monitor embryos using time lapse for long periods, and together with the reduced cost of data storage, this has opened the door to long-term time-lapse monitoring; large amounts of image material are now routinely gathered. However, the analysis is still to a large extent performed manually, and images are mostly used as qualitative reference. To make full use of the increased amount of microscopic image material, (semi)automated computer-aided tools are needed. An additional benefit of automation is the establishment of standardization tools for embryo selection and transfer, making decisions more transparent and less subjective. Another is the possibility of gathering and analyzing data in a high-throughput manner, collecting data from multiple clinics and increasing our knowledge of early human embryo development. In this study, the extraction of data to automatically select and track spatio-temporal events and features from sets of embryo images has been achieved using localized variance based on the distribution of image grey-scale levels. A retrospective cohort study was performed using time-lapse imaging data derived from 39 human embryos from seven couples, covering the time from fertilization up to 6.3 days. The profile of localized variance has been used to characterize syngamy, mitotic division and stages of cleavage, compaction, and blastocoel formation. Prior to analysis, the focal plane and embryo location were automatically detected, limiting precomputational user interaction to a calibration step and making the approach usable for automatic detection of a region of interest (ROI) regardless of the method of analysis. The results were validated against the opinion of clinical experts. © 2015 International Society for Advancement of Cytometry. PMID:25614363
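
    A localized-variance map of the kind the study tracks over time can be computed with two box filters, as in this scipy-based sketch; the window size is an assumption.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def local_variance(gray, size=15):
          # Variance of grey levels in a sliding size x size window,
          # computed as E[x^2] - E[x]^2 per window.
          g = gray.astype(float)
          mean = uniform_filter(g, size)
          mean_sq = uniform_filter(g ** 2, size)
          return mean_sq - mean ** 2

      # Tracking how this map's profile changes frame by frame flags events
      # such as syngamy, cleavage, compaction, and blastocoel formation.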

  7. Measuring track densities in lunar grains using image analysis

    NASA Technical Reports Server (NTRS)

    Blanford, G. E.; Mckay, D. S.; Bernhard, R. P.; Schulz, C. K.

    1994-01-01

    We have used digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays. Back-scattered electron images produced suitably high-contrast images for analysis. We used computer counting and measurement of area to obtain track densities. We found an excellent correlation with manual measurements for track densities below 1x10^8 cm^-2. For track densities between 1x10^8 and 1x10^9 cm^-2, we found that a regression formula using the percentage area covered by tracks gave good agreement with manual measurements. Measurement of track densities in lunar samples has been a very rewarding technique for measuring exposure ages and soil maturation processes. We have shown that we can reliably measure track densities in lunar grains using image analysis techniques. Automating track counting may allow application of this technique to important problems in regolith dynamics, including the ratio of radiation exposure to reworking in various surface and core samples and in regolith breccias.

  8. Learning approach for multicontent analysis of compound images

    NASA Astrophysics Data System (ADS)

    Besnehard, Quentin; Marchessoux, Cedric; Kimpe, Tom; Spalla, Guillaume; Joubel, Arnaud; Boudet, François

    2009-01-01

    In the context of the European Cantata project (ITEA project, 2006-2009), a complete Multi-Content Analysis framework was developed within Barco for the detection and analysis of compound images. The framework consists of a dataset, a Multi-Content Analysis (MCA) algorithm based on learning approaches, a ground truth, an evaluation module based on metrics, and a presentation module. The aim of the MCA methodology presented here is to classify the image content of computer screenshots into different categories such as text, graphical user interface, medical images, and other complex images. The AdaBoost meta-algorithm was chosen, implemented, and optimized as the classification method, as it fitted the constraints (real-time operation and precision). A large dataset, separated into training and testing subsets, and its ground truth (in the ViPER metadata format) were collected and generated for the four categories. The outcome of the MCA is a cascade of strong classifiers trained and tested on the different subsets. The obtained framework and its optimizations (binary search, pre-computing of the features, pre-sorting) allow the classifiers to be re-trained as often as needed. The preliminary results are quite encouraging, with a low false-positive rate and a true-positive rate close to expectations. The re-injection of false-negative examples from new testing subsets into the training phase resulted in better performance of the MCA.
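
    In scikit-learn terms, one stage of such a cascade could be a boosted ensemble of decision stumps, as sketched below; the feature vectors, class encoding, and estimator count are placeholders, since the Cantata framework's own implementation is not described in detail here.

      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.tree import DecisionTreeClassifier

      # X: feature vectors computed from screenshot regions; y: labels in
      # {text, GUI, medical, complex} encoded as integers.
      clf = AdaBoostClassifier(
          estimator=DecisionTreeClassifier(max_depth=1),  # weak learners: stumps
          n_estimators=200)
      # clf.fit(X_train, y_train)
      # predicted = clf.predict(X_test)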

  9. Automated pollen identification using microscopic imaging and texture analysis.

    PubMed

    Marcos, J Víctor; Nava, Rodrigo; Cristóbal, Gabriel; Redondo, Rafael; Escalante-Ramírez, Boris; Bueno, Gloria; Déniz, Óscar; González-Porto, Amelia; Pardo, Cristina; Chung, François; Rodríguez, Tomás

    2015-01-01

    Pollen identification is required in different scenarios such as prevention of allergic reactions, climate analysis or apiculture. However, it is a time-consuming task since experts are required to recognize each pollen grain through the microscope. In this study, we performed an exhaustive assessment of the utility of texture analysis for automated characterisation of pollen samples. A database composed of 1800 brightfield microscopy images of pollen grains from 15 different taxa was used for this purpose. A pattern recognition-based methodology was adopted to perform pollen classification. Four different methods were evaluated for texture feature extraction from the pollen image: Haralick's gray-level co-occurrence matrices (GLCM), log-Gabor filters (LGF), local binary patterns (LBP) and discrete Tchebichef moments (DTM). Fisher's discriminant analysis and k-nearest neighbour were subsequently applied to perform dimensionality reduction and multivariate classification, respectively. Our results reveal that LGF and DTM, which are based on the spectral properties of the image, outperformed GLCM and LBP in the proposed classification problem. Furthermore, we found that the combination of all the texture features resulted in the highest performance, yielding an accuracy of 95%. Therefore, thorough texture characterisation could be considered in further implementations of automatic pollen recognition systems based on image processing techniques. PMID:25259684
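
    One of the four descriptors compared, local binary patterns, is available directly in scikit-image; a sketch of turning a grain image into an LBP histogram feature vector follows, with the radius, neighbour count, and binning as typical settings rather than the study's exact ones.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(gray, P=8, R=1.0):
          # 'uniform' LBP yields P + 2 distinct pattern codes.
          codes = local_binary_pattern(gray, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                 density=True)
          return hist   # feature vector for the k-NN classifier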

  10. Applications of wavelets in morphometric analysis of medical images

    NASA Astrophysics Data System (ADS)

    Davatzikos, Christos; Tao, Xiaodong; Shen, Dinggang

    2003-11-01

    Morphometric analysis of medical images is playing an increasingly important role in understanding brain structure and function, as well as in understanding the way in which these change during development, aging and pathology. This paper presents three wavelet-based methods with related applications in morphometric analysis of magnetic resonance (MR) brain images. The first method handles cases where very limited datasets are available for the training of statistical shape models in the deformable segmentation. The method is capable of capturing a larger range of shape variability than the standard active shape models (ASMs) can, by using the elegant spatial-frequency decomposition of the shape contours provided by wavelet transforms. The second method addresses the difficulty of finding correspondences in anatomical images, which is a key step in shape analysis and deformable registration. The detection of anatomical correspondences is completed by using wavelet-based attribute vectors as morphological signatures of voxels. The third method uses wavelets to characterize the morphological measurements obtained from all voxels in a brain image, and the entire set of wavelet coefficients is further used to build a brain classifier. Since the classification scheme operates in a very-high-dimensional space, it can determine subtle population differences with complex spatial patterns. Experimental results are provided to demonstrate the performance of the proposed methods.

  11. Theory, Image Simulation, and Data Analysis of Chemical Release Experiments

    NASA Technical Reports Server (NTRS)

    Wescott, Eugene M.

    1994-01-01

    The final phase of Grant NAG6-1 involved analysis of physics of chemical releases in the upper atmosphere and analysis of data obtained on previous NASA sponsored chemical release rocket experiments. Several lines of investigation of past chemical release experiments and computer simulations have been proceeding in parallel. This report summarizes the work performed and the resulting publications. The following topics are addressed: analysis of the 1987 Greenland rocket experiments; calculation of emission rates for barium, strontium, and calcium; the CRIT 1 and 2 experiments (Collisional Ionization Cross Section experiments); image calibration using background stars; rapid ray motions in ionospheric plasma clouds; and the NOONCUSP rocket experiments.

  12. Reduction and analysis techniques for infrared imaging data

    NASA Technical Reports Server (NTRS)

    Mccaughrean, Mark

    1989-01-01

    Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories, and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focussed on the business of turning raw data into scientifically significant information. Turning pictures into papers, or equivalently, astronomy into astrophysics, both accurately and efficiently, is discussed. Also discussed are some of the factors that can be considered at each of three major stages; acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near infrared imaging.

  13. Terahertz imaging with missing data analysis for metamaterials characterization

    NASA Astrophysics Data System (ADS)

    Sokolnikov, Andre

    2012-05-01

    Terahertz imaging proves advantageous for metamaterials characterization, since the interaction of THz radiation with metamaterials produces clear patterns of the material. Characteristic "fingerprints" of the crystal structure help locate defects, dislocations, contamination, etc. THz time-domain spectroscopy (THz-TDS) is one of the tools used to control metamaterials design and manufacturing. A computational technique is suggested that provides a reliable way to calculate metamaterial structure parameters and spot defects. Based on missing data analysis, the applied signal processing facilitates a better-quality image while compensating for partially absent information. Results are provided.

  14. Component pattern analysis of chemicals using multispectral THz imaging system

    NASA Astrophysics Data System (ADS)

    Kawase, Kodo; Ogawa, Yuichi; Watanabe, Yuki

    2004-04-01

    We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Furthermore, we applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.

  15. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of-misclassification sense) by two major processes. In order of importance, they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  16. Identifying severity of electroporation through quantitative image analysis

    NASA Astrophysics Data System (ADS)

    Morshed, Bashir I.; Shams, Maitham; Mussivand, Tofy

    2011-04-01

    Electroporation is the formation of reversible hydrophilic pores in the cell membrane under electric fields. Severity of electroporation is challenging to measure and quantify. An image analysis method is developed, and the initial results with a fabricated microfluidic device are reported. The microfluidic device contains integrated microchannels and coplanar interdigitated electrodes allowing low-voltage operation and low-power consumption. Noninvasive human buccal cell samples were specifically stained, and electroporation was induced. Captured image sequences were analyzed for pixel color ranges to quantify the severity of electroporation. The method can detect even a minor occurrence of electroporation and can perform comparative studies.
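
    The pixel-colour-range measurement the abstract mentions can be approximated as the fraction of pixels whose hue falls inside the stain's range, e.g. with OpenCV as below; the HSV bounds are placeholders for whatever range matches the stain actually used.

      import numpy as np
      import cv2

      def stain_fraction(bgr_img, lo=(100, 50, 50), hi=(140, 255, 255)):
          # Fraction of pixels inside the stain's HSV range, usable as a
          # per-frame severity score for the electroporation sequence.
          hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
          return float(mask.mean()) / 255.0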

  17. Spectral mixture analysis of multispectral thermal infrared images

    NASA Technical Reports Server (NTRS)

    Gillespie, Alan R.

    1992-01-01

    Remote spectral measurements of light reflected or emitted from terrestrial scenes are commonly integrated over areas sufficiently large that the surface comprises more than one component. Techniques have been developed to analyze multispectral or imaging spectrometer data in terms of a wide range of mixtures of a limited number of components. Spectral mixture analysis has been used primarily for visible and near-infrared images, but it may also be applied to thermal infrared data. Two approaches are reviewed: binary mixing and a more general treatment for isothermal mixtures of a greater number of components.
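
    For the linear case, spectral mixture analysis reduces to solving for endmember fractions per pixel by least squares, as in this numpy sketch; the endmember matrix and the omission of the sum-to-one and non-negativity constraints are simplifying assumptions.

      import numpy as np

      def unmix(pixel_spectrum, E):
          # E: (n_bands, n_components) matrix whose columns are endmember
          # spectra; returns fractions f minimizing ||E f - pixel_spectrum||.
          f, *_ = np.linalg.lstsq(E, pixel_spectrum, rcond=None)
          return f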

  18. Towards noncontact skin melanoma selection by multispectral imaging analysis

    NASA Astrophysics Data System (ADS)

    Kuzmina, Ilona; Diebele, Ilze; Jakovels, Dainis; Spigulis, Janis; Valeine, Lauma; Kapostinsh, Janis; Berzina, Anna

    2011-06-01

    A clinical trial comprising 334 pigmented and vascular lesions has been performed in three Riga clinics by means of multispectral imaging analysis. The imaging system Nuance 2.4 (CRi) and self-developed software for mapping of the main skin chromophores were used. Specific features were observed and analyzed for malignant skin melanomas: notably higher absorbance (especially as the difference of optical density relative to the healthy skin), uneven chromophore distribution over the lesion area, and the possibility to select the ``melanoma areas'' in the correlation graphs of chromophores. The obtained results indicate clinical potential of this technology for noncontact selection of melanoma from other pigmented and vascular skin lesions.

  19. Tracking of vessel diameter fluctuations using digital image analysis

    NASA Astrophysics Data System (ADS)

    Aggarwal, Shanti J.; Yip, C.-Y.; Diller, Kenneth R.; Bovik, Alan C.

    1990-05-01

    An automatic digital image processing technique for vasomotion analysis in the peripheral microcirculation, at multiple sites simultaneously and in real time, is presented. The algorithm utilizes either fluorescent or bright-field microimages of the vasculature as input. The video images are digitized and analyzed on-line by an IBM RT PC. Using digital filtering and edge detection, the technique allows simultaneous diameter measurement at more than one site. The sampling frequency is higher than 5 Hz when only one site is tracked. The performance of the algorithm is tested in the hamster cutaneous microcirculation.

  20. Data analysis tools for imaging infrared technology within the ImageJ environment

    NASA Astrophysics Data System (ADS)

    Rogers, Ryan K.; Edwards, W. Derrik; Waddle, Caleb E.; Dobbins, Christopher L.; Wood, Sam B.

    2013-06-01

    For over 30 years, the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) has specialized in characterizing the performance of infrared (IR) imaging systems in the laboratory and field. In the late 1990s, AMRDEC developed the Automated IR Sensor Test Facility (AISTF), which allowed efficient deployment testing of Unmanned Aerial System (UAS) payloads. More recently, ImageJ has been used predominantly as the image processing environment of choice for analysis of laboratory, field, and simulated data. The strengths of ImageJ are that it is maintained by the U.S. National Institutes of Health, it exists in the public domain, and it functions on all major operating systems. Three new tools or "plugins" have been developed at AMRDEC to enhance the accuracy and efficiency of analysis. First, a Noise Equivalent Temperature Difference (NETD) plugin was written to process Signal Transfer Function (SiTF) and 3D noise data. Another plugin was produced that measures the Modulation Transfer Function (MTF) given either an edge or slit target. Lastly, a plugin was developed to measure Focal Plane Array (FPA) defects, classify and bin the customizable defects, and report statistics. This paper will document the capabilities and practical applications of these tools as well as profile their advantages over previous methods of analysis.

  1. Multiscale Morphological Filtering for Analysis of Noisy and Complex Images

    NASA Technical Reports Server (NTRS)

    Kher, A.; Mitra, S.

    1993-01-01

    Images acquired with passive sensing techniques suffer from illumination variations and poor local contrasts that create major difficulties in interpretation and identification tasks. On the other hand, images acquired with active sensing techniques based on monochromatic illumination are degraded by speckle noise. Mathematical morphology offers elegant techniques to handle a wide range of image degradation problems. Unlike linear filters, morphological filters do not blur edges and hence maintain higher image resolution. Their rich mathematical framework facilitates the design and analysis of these filters as well as their hardware implementation. Morphological filters are easier to implement and are more cost-effective and efficient than several conventional linear filters. Morphological filters to remove speckle noise while maintaining high resolution and preserving thin image regions that are particularly vulnerable to speckle noise were developed and applied to SAR imagery. These filters used a combination of linear (one-dimensional) structuring elements in different (typically four) orientations. Although this approach preserves more detail than simple morphological filters using two-dimensional structuring elements, the limited orientations of the one-dimensional elements only approximate the fine details of the region boundaries. A more robust filter designed recently overcomes the limitation of the fixed orientations. This filter uses a combination of concave and convex structuring elements. Morphological operators are also useful in extracting features from visible and infrared imagery. A multiresolution image pyramid obtained with successive filtering and a subsampling process aids in the removal of illumination variations and enhances local contrasts. A morphology-based interpolation scheme was also introduced to reduce intensity discontinuities created in any morphological filtering task. The generality of morphological filtering techniques in
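
    The oriented-element idea can be sketched as follows with scikit-image: open the image with one-dimensional line elements in four orientations and take the pointwise maximum, so thin features aligned with any of the lines survive the speckle suppression. The element length is an assumption.

      import numpy as np
      from skimage.morphology import opening

      def line_element(length, horizontal=True):
          # Boolean footprint containing a single 1-D line of pixels.
          se = np.zeros((length, length), dtype=bool)
          mid = length // 2
          if horizontal:
              se[mid, :] = True
          else:
              se[:, mid] = True
          return se

      def oriented_opening(img, length=7):
          # Four orientations: horizontal, vertical, and both diagonals.
          elems = [line_element(length, True), line_element(length, False),
                   np.eye(length, dtype=bool),
                   np.fliplr(np.eye(length, dtype=bool))]
          # Union of the openings keeps thin structures in any orientation.
          return np.maximum.reduce([opening(img, e) for e in elems])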

  2. Spectral analysis of mammographic images using a multitaper method

    SciTech Connect

    Wu Gang; Mainprize, James G.; Yaffe, Martin J.

    2012-02-15

    Purpose: Power spectral analysis in radiographic images is conventionally performed using a windowed overlapping averaging periodogram. This study describes an alternative approach using a multitaper technique and compares its performance with that of the standard method. This tool will be valuable in power spectrum estimation of images, whose content deviates significantly from uniform white noise. The performance of the multitaper approach will be evaluated in terms of spectral stability, variance reduction, bias, and frequency precision. The ultimate goal is the development of a useful tool for image quality assurance. Methods: A multitaper approach uses successive data windows of increasing order. This mitigates spectral leakage allowing one to calculate a reduced-variance power spectrum. The multitaper approach will be compared with the conventional power spectrum method in several typical situations, including the noise power spectra (NPS) measurements of simulated projection images of a uniform phantom, NPS measurement of real detector images of a uniform phantom for two clinical digital mammography systems, and the estimation of the anatomic noise in mammographic images (simulated images and clinical mammograms). Results: Examination of spectrum variance versus frequency resolution and bias indicates that the multitaper approach is superior to the conventional single taper methods in the prevention of spectrum leakage and variance reduction. More than four times finer frequency precision can be achieved with equivalent or less variance and bias. Conclusions: Without any shortening of the image data length, the bias is smaller and the frequency resolution is higher with the multitaper method, and the need to compromise in the choice of regions of interest size to balance between the reduction of variance and the loss of frequency resolution is largely eliminated.
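
    The principle behind the multitaper estimate - averaging periodograms taken through a family of orthogonal DPSS (Slepian) tapers - can be shown in one dimension with scipy; a 2-D NPS version would apply the tapers along both axes. The NW and taper-count values below are conventional choices, not the paper's settings.

      import numpy as np
      from scipy.signal.windows import dpss

      def multitaper_psd(x, NW=4, K=7):
          # K orthogonal Slepian tapers with time-bandwidth product NW.
          tapers = dpss(len(x), NW, Kmax=K)
          # One periodogram per taper; averaging reduces the variance
          # without the spectral leakage of a single boxcar window.
          spectra = [np.abs(np.fft.rfft(t * x)) ** 2 for t in tapers]
          return np.mean(spectra, axis=0)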

  3. Mammographic quantitative image analysis and biologic image composition for breast lesion characterization and classification

    SciTech Connect

    Drukker, Karen; Giger, Maryellen L.; Li, Hui; Duewer, Fred; Malkov, Serghei; Joe, Bonnie; Kerlikowske, Karla; Shepherd, John A.; Flowers, Chris I.; Drukteinis, Jennifer S.

    2014-03-15

    Purpose: To investigate whether biologic image composition of mammographic lesions can improve upon existing mammographic quantitative image analysis (QIA) in estimating the probability of malignancy. Methods: The study population consisted of 45 breast lesions imaged with dual-energy mammography prior to breast biopsy, with final diagnoses of 10 invasive ductal carcinomas, 5 ductal carcinomas in situ, 11 fibroadenomas, and 19 other benign diagnoses. Analysis was threefold: (1) the raw low-energy mammographic images were analyzed with an established in-house QIA method, “QIA alone”; (2) the three-compartment breast (3CB) composition measures—derived from the dual-energy mammography—of water, lipid, and protein thickness were assessed, “3CB alone”; and (3) information from QIA and 3CB was combined, “QIA + 3CB.” Analysis was initiated from radiologist-indicated lesion centers and was otherwise fully automated. Steps of the QIA and 3CB methods were lesion segmentation, characterization, and subsequent classification for malignancy in leave-one-case-out cross-validation. Performance assessment included box plots, Bland–Altman plots, and Receiver Operating Characteristic (ROC) analysis. Results: The area under the ROC curve (AUC) for distinguishing between benign and malignant lesions (invasive and DCIS) was 0.81 (standard error 0.07) for the “QIA alone” method, 0.72 (0.07) for the “3CB alone” method, and 0.86 (0.04) for “QIA + 3CB” combined. The difference in AUC between “QIA + 3CB” and “QIA alone” was 0.043 but failed to reach statistical significance (95% confidence interval [–0.17 to +0.26]). Conclusions: In this pilot study analyzing the new 3CB imaging modality, knowledge of the composition of breast lesions and their periphery appeared additive in combination with existing mammographic QIA methods for the distinction between different benign and malignant lesion types.

  4. Collaborative analysis of multi-gigapixel imaging data using Cytomine

    PubMed Central

    Marée, Raphaël; Rollus, Loïc; Stévens, Benjamin; Hoyoux, Renaud; Louppe, Gilles; Vandaele, Rémy; Begon, Jean-Michel; Kainz, Philipp; Geurts, Pierre; Wehenkel, Louis

    2016-01-01

    Motivation: Collaborative analysis of massive imaging datasets is essential to enable scientific discoveries. Results: We developed Cytomine to foster active and distributed collaboration of multidisciplinary teams for large-scale image-based studies. It uses web development methodologies and machine learning in order to readily organize, explore, share and analyze (semantically and quantitatively) multi-gigapixel imaging data over the internet. We illustrate how it has been used in several biomedical applications. Availability and implementation: Cytomine (http://www.cytomine.be/) is freely available under an open-source license from http://github.com/cytomine/. A documentation wiki (http://doc.cytomine.be) and a demo server (http://demo.cytomine.be) are also available. Contact: info@cytomine.be Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26755625

  5. Removing Milky Way from airglow images using principal component analysis

    NASA Astrophysics Data System (ADS)

    Li, Zhenhua; Liu, Alan; Sivjee, Gulamabas G.

    2014-04-01

    Airglow imaging is an effective way to obtain atmospheric gravity wave information in the airglow layers in the upper mesosphere and the lower thermosphere. Airglow images are often contaminated by the Milky Way emission. To extract gravity wave parameters correctly, the Milky Way must be removed. The paper demonstrates that principal component analysis (PCA) can effectively represent the dominant variation patterns of the intensity of airglow images that are associated with the slow moving Milky Way features. Subtracting this PCA reconstructed field reveals gravity waves that are otherwise overwhelmed by the strong spurious waves associated with the Milky Way. Numerical experiments show that nonstationary gravity waves with typical wave amplitudes and persistences are not affected by the PCA removal because the variances contributed by each wave event are much smaller than the ones in the principal components.
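
    Reconstruct-and-subtract with PCA is compact enough to sketch directly; here the leading components of an airglow image stack model the slowly moving Milky Way pattern, and the residual keeps the faster gravity-wave signal. The number of retained components is an assumption.

      import numpy as np
      from sklearn.decomposition import PCA

      def remove_slow_pattern(frames, n_components=3):
          # frames: (n_frames, ny, nx) stack of airglow images.
          n, ny, nx = frames.shape
          X = frames.reshape(n, ny * nx)
          pca = PCA(n_components=n_components)
          slow = pca.inverse_transform(pca.fit_transform(X))  # low-rank field
          return (X - slow).reshape(n, ny, nx)  # residual: wave signal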

  6. Imaging hydrated microbial extracellular polymers: Comparative analysis by electron microscopy

    SciTech Connect

    Dohnalkova, A.C.; Marshall, M. J.; Arey, B. W.; Williams, K. H.; Buck, E. C.; Fredrickson, J. K.

    2011-01-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches of imaging at room temperature and a suite of cryogenic electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of the hydrated bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding nature of interactions between microbial extracellular polymers and their environment.

  7. Imaging Hydrated Microbial Extracellular Polymers: Comparative Analysis by Electron Microscopy

    SciTech Connect

    Dohnalkova, Alice; Marshall, Matthew J.; Arey, Bruce W.; Williams, Kenneth H.; Buck, Edgar C.; Fredrickson, Jim K.

    2011-02-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools for investigating microscale associations. Electron microscopy has been used extensively for geomicrobial investigations and although used bona fide, the traditional methods of sample preparation do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches of imaging at room temperature and a suite of cryo-electron microscopy methods providing imaging in the close-to-natural hydrated state. In situ, we observed an irreversible transformation of bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in the collapse of hydrated gel-like EPS into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could subsequently affect conclusions regarding nature of interactions between microbial extracellular polymers and their environment.

  8. Cluster Method Analysis of K. S. C. Image

    NASA Technical Reports Server (NTRS)

    Rodriguez, Joe, Jr.; Desai, M.

    1997-01-01

    Information obtained from satellite-based systems has moved to the forefront as a method for the identification of many land cover types. Identification of different land features through remote sensing is an effective tool for regional and global assessment of geometric characteristics. Classification data acquired from remote sensing images have a wide variety of applications. In particular, analysis of remote sensing images has special applications in the classification of various types of vegetation. Results obtained from classification studies of a particular area or region serve towards a greater understanding of what parameters (ecological, temporal, etc.) affect the region being analyzed. In this paper, we make a distinction between the two types of classification approaches, although focus is given to the unsupervised classification method using 1987 Thematic Mapper (TM) images of Kennedy Space Center.
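
    Unsupervised classification of multiband imagery is, in miniature, a clustering of pixel spectra; a k-means sketch with scikit-learn follows, where the band-array shape and class count are illustrative rather than taken from the study.

      import numpy as np
      from sklearn.cluster import KMeans

      def unsupervised_classify(bands, n_classes=8):
          # bands: (n_bands, ny, nx) array of co-registered spectral bands,
          # e.g. Thematic Mapper channels.
          nb, ny, nx = bands.shape
          X = bands.reshape(nb, ny * nx).T        # one spectrum per pixel
          labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(X)
          return labels.reshape(ny, nx)           # thematic class map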

  9. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by diseases, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor based on MF-DFA: the local generalized Hurst exponent, recorded as LHq. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq of pixels that come from a specific region. Consequently, a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of leaves affected by six kinds of corn disease are tested in our experiments. The proposed method is compared in the experiments with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method can recognize the lesion regions more effectively and provide more robust segmentations.

  10. Microcomputer-based digital image analysis system for quantitative autoradiography

    SciTech Connect

    Hoffman, T.J.; Volkert, W.A.; Holmes, R.A.

    1988-01-01

    A computerized image processing system utilizing an IBM-XT personal microcomputer with the capability of performing quantitative cerebral autoradiography is described. All of the system components are standard computer and optical hardware that can be easily assembled. The system has 512 horizontal by 512 vertical axis resolution with 8 bits per pixel (256 gray levels). Unlike other dedicated image processing systems, the IBM-XT permits the assembly of an efficient, low-cost image analysis system without sacrificing other capabilities of the IBM personal computer. The application of this system in both qualitative and quantitative autoradiography has been the principal factor in developing a new radiopharmaceutical to measure regional cerebral blood flow.

  11. Digital analysis of the fringe pattern images from biomedical objects

    NASA Astrophysics Data System (ADS)

    Jaronski, Jaroslaw W.; Podbielska, Halina; Kasprzak, Henryk T.

    1995-03-01

    Recent developments in optical, optoelectronic, and digital electronic imaging and metrology are creating opportunities for a new type of diagnostics methods and systems. Some of these techniques, established already in the field of technical and industrial non-destructive testing, have increasingly gained importance in biomedical research and may enter the clinical scene, as well. Even the laboratory investigations can have strong impact for further developments in this field. However, in experimental medicine the quantitative analysis of experimental data is sometimes required. When applying different interferometric methods, the obtained results are in the form of fringe pattern images. In this paper some of these methods, including holographic interferometry, laser interferometry and moire techniques are described and illustrated by experimental results. For acquisition and evaluation of the fringe pattern images, the Bioscan Optimas package from Bioscan, Incorporated of Edmonds, Wash., running under Microsoft Windows was used.

  12. Image analysis technique applied to lock-exchange gravity currents

    NASA Astrophysics Data System (ADS)

    Nogueira, Helena I. S.; Adduce, Claudia; Alves, Elsa; Franca, Mário J.

    2013-04-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in the image relating the amount of dye uniformly distributed in the tank and the greyscale values in the corresponding images. The results are evaluated and corrected by applying the mass conservation principle within the experimental tank. The procedure is a simple way to assess the time-varying density distribution within the gravity current, allowing the investigation of gravity current dynamics and mixing processes.
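
    The per-pixel calibration step can be written in closed form: fit a line relating grey level to known dye concentration for each pixel across the calibration stack, then invert recorded frames into concentration (and hence density) fields. A linear response per pixel is the assumption here.

      import numpy as np

      def pixelwise_calibration(calib_stack, concentrations):
          # calib_stack: (n, ny, nx) frames of uniformly mixed dye at n known
          # concentrations; returns per-pixel (slope, intercept) such that
          # concentration ~= slope * grey_level + intercept.
          g = calib_stack.astype(float)
          c = np.asarray(concentrations, float)[:, None, None]
          gm, cm = g.mean(axis=0), c.mean()
          slope = ((g - gm) * (c - cm)).mean(axis=0) / ((g - gm) ** 2).mean(axis=0)
          intercept = cm - slope * gm
          return slope, intercept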

  13. Determining Percentage of Broken Rice by Using Image Analysis

    NASA Astrophysics Data System (ADS)

    Aghayeghazvini, H.; Afzal, A.; Heidarisoltanabadi, M.; Malek, S.; Mollabashi, L.

    During the rice milling process, a huller system is used to remove the rough rice hull, and then an abrasive or frictional mill produces whitened rice. Proper adjustment and control of such machines is important, because the breakage percentage of rice is one of the main factors in the determination of quality. Today, image processing techniques have become an increasingly popular and cost-effective option. The aim of this study is the estimation of rice quality by image processing as a tool to determine the broken-grain percentage. The images were acquired by a digital camera (1944 x 2592 pixels) from grains spread on a corrugated plate, and then analyzed with the Scion software. Statistical analysis showed a significant correlation (greater than 0.9) between the results obtained from the proposed method and the conventional method. This work indicates that the digital processing technique can be used for estimating broken grains.

  15. Imaging hydrated microbial extracellular polymers: comparative analysis by electron microscopy.

    PubMed

    Dohnalkova, Alice C; Marshall, Matthew J; Arey, Bruce W; Williams, Kenneth H; Buck, Edgar C; Fredrickson, James K

    2011-02-01

    Microbe-mineral and -metal interactions represent a major intersection between the biosphere and geosphere but require high-resolution imaging and analytical tools to investigate microscale associations. Electron microscopy has been used extensively for geomicrobial investigations, but the traditional methods of sample preparation, although applied in good faith, do not preserve the native morphology of microbiological components, especially extracellular polymers. Herein, we present a direct comparative analysis of microbial interactions using conventional electron microscopy approaches, with imaging at room temperature, and a suite of cryogenic electron microscopy methods providing imaging in the close-to-natural hydrated state. We observed in situ an irreversible transformation of the hydrated bacterial extracellular polymers during the traditional dehydration-based sample preparation that resulted in their collapse into filamentous structures. Dehydration-induced polymer collapse can lead to inaccurate spatial relationships and hence could affect conclusions regarding the nature of interactions between microbial extracellular polymers and their environment. PMID:21169451

  16. Photofragment image analysis using the Onion-Peeling Algorithm

    NASA Astrophysics Data System (ADS)

    Manzhos, Sergei; Loock, Hans-Peter

    2003-07-01

    With the growing popularity of the velocity map imaging technique, a need arose for the analysis of photoion and photoelectron images. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the onion-peeling algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters.
    Program summary
    Title of program: Glass Onion
    Catalogue identifier: ADRY
    Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Licensing provisions: none
    Computer: IBM PC
    Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT
    Programming language used: Delphi 4.0
    Memory required to execute with typical data: 18 Mwords
    No. of bits in a word: 32
    No. of bytes in distributed program, including test data, etc.: 9 911 434
    Distribution format: zip file
    Keywords: Photofragment image, onion peeling, anisotropy parameters
    Nature of physical problem: Information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process resides. Reconstructing the three-dimensional distribution from the photofragment image is the first step, further processing involving angular and radial integration of the inverted image to obtain velocity and angular distributions. Provisions have to be made to correct for slight distortions of the image, and to
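
    The core of an onion-peeling inversion is short enough to sketch. The version below is a generic textbook implementation for a single row of a cylindrically symmetric image, using unit-width annular shells and back-substitution from the outermost shell inwards; it illustrates the technique, and is not the Glass Onion code itself.

      import numpy as np

      def onion_peel(profile):
          """Invert a half projection profile[j], j = 0..N-1 (unit-width shells)."""
          N = len(profile)
          x = np.arange(N)                       # lateral offsets of the samples
          r = np.arange(N + 1.0)                 # annular shell boundary radii
          # chord[i, j]: path length of the ray at offset x[j] through shell i
          outer = np.sqrt(np.maximum(r[1:, None] ** 2 - x[None, :] ** 2, 0.0))
          inner = np.sqrt(np.maximum(r[:-1, None] ** 2 - x[None, :] ** 2, 0.0))
          chord = 2.0 * (outer - inner)
          # peel from the outermost shell inwards (back-substitution)
          emission = np.zeros(N)
          for i in range(N - 1, -1, -1):
              tail = chord[i + 1:, i] @ emission[i + 1:]
              emission[i] = (profile[i] - tail) / chord[i, i]
          return emission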

  17. Imaging system sensitivity analysis with NV-IPM

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan; Teaney, Brian

    2014-05-01

    This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well-designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM adds the ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by performing sensitivity analysis on outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
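
    NV-IPM computes such sensitivities internally; as a generic illustration of the idea, the sketch below applies one-at-a-time central differences to a user-supplied model and reports normalized (percent-per-percent) sensitivities. The toy SNR model and its parameter names are assumptions, not part of NV-IPM.

      import numpy as np

      def normalized_sensitivities(model, params, rel_step=0.01):
          """Return d(output)/d(param) scaled by param/output for each parameter."""
          base = model(**params)
          sens = {}
          for name, value in params.items():
              h = rel_step * value
              hi = model(**{**params, name: value + h})
              lo = model(**{**params, name: value - h})
              sens[name] = (hi - lo) / (2 * h) * value / base
          return sens

      # toy example: a gain-times-signal-over-noise "SNR" model
      snr = lambda signal, noise, gain: gain * signal / noise
      print(normalized_sensitivities(snr, {"signal": 50.0, "noise": 4.0, "gain": 2.0}))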

  18. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  19. Enhanced Processing and Analysis of Cassini SAR Images of Titan

    NASA Astrophysics Data System (ADS)

    Lucas, A.; Aharonson, O.; Hayes, A. G.; Deledalle, C. A.; Kirk, R. L.

    2011-12-01

    SAR images suffer from speckle noise, which hinders interpretation and quantitative analysis. We have adapted a non-local algorithm for de-noising images using an appropriate multiplicative noise model [1] to the analysis of Cassini SAR images. We illustrate some examples here that demonstrate the improvement in landform interpretation, focusing on transport processes at Titan's surface. Interpretation of geomorphic features is facilitated (Figure 1), including details of the channels incised into the terrain, shoreline morphology, and contrast variations in the dark, liquid-covered areas. The latter are suggestive of sub-marine channels and gradients in the bathymetry. Furthermore, substantial quantitative improvements are possible. We show that a Digital Elevation Model derived from radargrammetry [2] using the de-noised images is obtained with a greater number of matching points (up to 80%) and a better correlation (59% of the pixels give a good correlation in the de-noised data compared with 18% in the original SAR image). An elevation hypsogram of our enhanced DEM shows evidence that fluvial and/or lacustrine processes have substantially affected the topographic distribution. Dune wavelengths and interdune extents are more precisely measured. Finally, radarclinometry techniques applied to our new data are more accurate in dune and mountainous regions. [1] Deledalle C-A., et al., 2009, Weighted maximum likelihood denoising with iterative and probabilistic patch-based weights, Telecom Paris. [2] Kirk, R.L., et al., 2007, First stereoscopic radar images of Titan, Lunar Planet. Sci., XXXVIII, Abstract #1427, Lunar and Planetary Institute, Houston.
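
    The reference filter [1] uses iterative, probabilistic patch-based weights tailored to the multiplicative noise model; a simpler homomorphic stand-in that conveys the idea is sketched below: take logarithms so the speckle becomes approximately additive, apply ordinary non-local means, and exponentiate back. The scikit-image calls are real, but the parameter values are assumptions.

      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      def despeckle(sar_image):
          log_img = np.log(sar_image + 1e-6)      # multiplicative -> ~additive noise
          sigma = np.mean(estimate_sigma(log_img))
          den = denoise_nl_means(log_img, patch_size=7, patch_distance=11,
                                 h=0.8 * sigma, fast_mode=True)
          return np.exp(den)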

  20. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary in health care groups to improve prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in histological images depend on numerous complex factors, correlations between image-analysis measurements and the time course, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from the processing of measurement data, which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example of the use of statistics, the major types of variation in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results

  1. GRETNA: a graph theoretical network analysis toolbox for imaging connectomics

    PubMed Central

    Wang, Jinhui; Wang, Xindi; Xia, Mingrui; Liao, Xuhong; Evans, Alan; He, Yong

    2015-01-01

    Recent studies have suggested that the brain's structural and functional networks (i.e., connectomics) can be constructed by various imaging technologies (e.g., EEG/MEG; structural, diffusion and functional MRI) and further characterized by graph theory. Given the huge complexity of network construction, analysis and statistics, toolboxes incorporating these functions are largely lacking. Here, we developed the GRaph thEoreTical Network Analysis (GRETNA) toolbox for imaging connectomics. GRETNA has several key features: (i) an open-source, Matlab-based, cross-platform (Windows and UNIX OS) package with a graphical user interface (GUI); (ii) allowing topological analyses of global and local network properties with parallel computing ability, independent of imaging modality and species; (iii) providing flexible manipulations in several key steps during network construction and analysis, which include network node definition, network connectivity processing, network type selection and choice of thresholding procedure; (iv) allowing statistical comparisons of global, nodal and connectional network metrics and assessments of relationships between these network metrics and clinical or behavioral variables of interest; and (v) including functionality in image preprocessing and network construction based on resting-state functional MRI (R-fMRI) data. After applying GRETNA to a publicly released R-fMRI dataset of 54 healthy young adults, we demonstrated that human brain functional networks exhibit efficient small-world, assortative, hierarchical and modular organizations and possess highly connected hubs, and that these findings are robust against different analytical strategies. With these efforts, we anticipate that GRETNA will accelerate imaging connectomics in an easy, quick and flexible manner. GRETNA is freely available on the NITRC website. PMID:26175682
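
    GRETNA itself is Matlab-based, but the global metrics it reports can be illustrated in a few lines of Python with networkx; the correlation-matrix input, the 10% sparsity threshold and the restriction to the largest connected component are assumptions of this sketch.

      import numpy as np
      import networkx as nx

      def global_metrics(corr, sparsity=0.10):
          """Threshold a correlation matrix into a binary graph and summarize it."""
          np.fill_diagonal(corr, 0.0)
          cut = np.quantile(np.abs(corr), 1.0 - sparsity)   # keep strongest edges
          G = nx.from_numpy_array((np.abs(corr) >= cut).astype(int))
          G = G.subgraph(max(nx.connected_components(G), key=len))
          return {"clustering": nx.average_clustering(G),
                  "char_path_length": nx.average_shortest_path_length(G),
                  "assortativity": nx.degree_assortativity_coefficient(G)}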

  2. Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging

    NASA Astrophysics Data System (ADS)

    Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke

    2011-12-01

    In this paper we present the results of experimental investigations using two important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary, noninvasive and nondestructive approach to the investigation of artworks that can be applied in situ. Four major projects are discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of the elemental composition and spectral reflectance of more than 1000 Japanese pigments that have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. The analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging can complement the results of high energy-based techniques.

  3. Radar image sequence analysis of inhomogeneous water surfaces

    NASA Astrophysics Data System (ADS)

    Seemann, Joerg; Senet, Christian M.; Dankert, Heiko; Hatten, Helge; Ziemer, Friedwart

    1999-10-01

    The radar backscatter from the ocean surface, called sea clutter, is modulated by the surface wave field. A method was developed to estimate the near-surface current, the water depth and calibrated surface wave spectra from nautical radar image sequences. The algorithm is based on the three-dimensional Fast Fourier Transform (FFT) of the spatio-temporal sea clutter pattern in the wavenumber-frequency domain. The dispersion relation is used to define a filter that separates the spectral signal of the imaged waves from the background noise component caused by speckle noise. The signal-to-noise ratio (SNR) contains information about the significant wave height. The method has proved reliable for the analysis of homogeneous water surfaces at offshore installations. Radar images are inhomogeneous because the image transfer function (ITF) depends on the azimuth angle between the wave propagation and the antenna viewing direction. The inhomogeneity of radar imaging is analyzed using image sequences of a homogeneous deep-water surface sampled by a ship-borne radar. Changing water depths in shallow-water regions induce horizontal gradients of the tidal current. Wave refraction occurs due to the spatial variability of the current and water depth. These areas cannot be investigated with the standard method. A new method, based on local wavenumber estimation with the multiple-signal classification (MUSIC) algorithm, is outlined. The MUSIC algorithm provides superior wavenumber resolution on local spatial scales. First results, retrieved from a radar image sequence taken from an installation at a coastal site, are presented.
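
    The dispersion-relation filter at the heart of the method can be sketched schematically. The fragment below masks the 3D spectrum around the zero-current, deep-water relation omega^2 = g*k; in practice the relation must include the current-induced Doppler shift and finite-depth term, so the grid spacings, tolerance and simplified dispersion law here are assumptions.

      import numpy as np

      def dispersion_filter(frames, dx, dt, tol=0.3, g=9.81):
          """frames: (T, Ny, Nx) sea-clutter image sequence; tol in rad/s."""
          F = np.fft.fftn(frames)
          om = 2 * np.pi * np.fft.fftfreq(frames.shape[0], dt)
          ky = 2 * np.pi * np.fft.fftfreq(frames.shape[1], dx)
          kx = 2 * np.pi * np.fft.fftfreq(frames.shape[2], dx)
          k = np.hypot(ky[None, :, None], kx[None, None, :])
          # keep energy near the deep-water dispersion shell, drop background noise
          mask = np.abs(np.abs(om)[:, None, None] - np.sqrt(g * k)) < tol
          return np.fft.ifftn(F * mask).real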

  4. DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging.

    PubMed

    Yan, Chao-Gan; Wang, Xin-Di; Zuo, Xi-Nian; Zang, Yu-Feng

    2016-07-01

    Brain imaging efforts are increasingly devoted to decoding the functioning of the human brain. Among neuroimaging techniques, resting-state fMRI (R-fMRI) is currently expanding exponentially. Beyond the general neuroimaging analysis packages (e.g., SPM, AFNI and FSL), REST and DPARSF were developed to meet the increasing need for user-friendly toolboxes for R-fMRI data processing. To address recently identified methodological challenges of R-fMRI, we introduce the newly developed toolbox, DPABI, which evolved from REST and DPARSF. DPABI incorporates recent research advances on head motion control and measurement standardization, thus allowing users to evaluate results using stringent control strategies. DPABI also emphasizes test-retest reliability and quality control of data processing. Furthermore, DPABI provides a user-friendly pipeline analysis toolkit for rat/monkey R-fMRI data analysis to reflect the rapid advances in animal imaging. In addition, DPABI includes preprocessing modules for task-based fMRI, voxel-based morphometry analysis, statistical analysis and results viewing. DPABI is designed to make data analysis require fewer manual operations, be less time-consuming, demand less specialized skill, carry a smaller risk of inadvertent mistakes, and be more comparable across studies. We anticipate this open-source toolbox will assist novices and expert users alike and continue to support advancing R-fMRI methodology and its application to clinical translational studies. PMID:27075850

  5. Automated target recognition technique for image segmentation and scene analysis

    NASA Astrophysics Data System (ADS)

    Baumgart, Chris W.; Ciarcia, Christopher A.

    1994-03-01

    Automated target recognition (ATR) software has been designed to perform image segmentation and scene analysis. Specifically, this software was developed as a package for the Army's Minefield and Reconnaissance and Detector (MIRADOR) program. MIRADOR is an on/off road, remote control, multisensor system designed to detect buried and surface-emplaced metallic and nonmetallic antitank mines. The basic requirements for this ATR software were the following: (1) the ability to separate target objects from the background in low signal-to-noise conditions; (2) the ability to handle a relatively high dynamic range in imaging light levels; (3) the ability to compensate for or remove light source effects such as shadows; and (4) the ability to identify target objects as mines. The image segmentation and target evaluation were performed using an integrated and parallel processing approach. Three basic techniques (texture analysis, edge enhancement, and contrast enhancement) were used collectively to extract all potential mine target shapes from the basic image. Target evaluation was then performed using a combination of size, geometrical, and fractal characteristics, which resulted in a calculated probability for each target shape. Overall results with this algorithm were quite good, though there is a tradeoff between detection confidence and the number of false alarms. This technology also has applications in the areas of hazardous waste site remediation, archaeology, and law enforcement.

  6. Biofilm monitoring on rotating discs by image analysis.

    PubMed

    Pons, Marie-Noëlle; Milferstedt, Kim; Morgenroth, Eberhard

    2009-05-01

    The macrostructure development of biofilms grown in a lab-scale rotating biological contactor was monitored by analyzing the average opacity and the texture of gray-level images of the discs. The reactor was fed with municipal or synthetic wastewater. Experiments lasted 4-14 weeks. The images were obtained with a flat-bed scanner. The opacity and its standard deviation are extracted directly from the annular zone where the biofilm develops; this zone is bounded by the outer edge of the disc and the waterline. The spatial gray-level dependence matrix (SGLDM) approach was used for the texture assessment. As this method requires rectangular images, a geometrical transformation had to be developed to map the ring onto a workable rectangular area. This transformation now allows quantitative image analysis of circular biofilms. As a last step, Principal Component Analysis was applied to the set of textural descriptors to reduce the number of textural parameters. Opacity and textural information allowed non-intrusive monitoring of the growth/regrowth of the biofilms as well as of biofilm loss due to detachment, auto-digestion, or protozoan grazing. Textural description proved very valuable, helping to discriminate biofilms of similar opacity characteristics but different macrostructures. PMID:19132742
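
    The SGLDM step on the unwrapped (ring-to-rectangle) image corresponds to what scikit-image calls a gray-level co-occurrence matrix; the sketch below computes a small set of Haralick-type descriptors and reduces a series of them with PCA. Distances, angles, the 64-level quantization and the three retained components are assumed settings, not those of the study.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.decomposition import PCA

      def texture_descriptors(unwrapped, levels=64):
          img = (unwrapped / unwrapped.max() * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(img, distances=[1, 2, 4],
                              angles=[0, np.pi / 2], levels=levels, normed=True)
          return np.concatenate([graycoprops(glcm, p).ravel()
                                 for p in ("contrast", "homogeneity",
                                           "energy", "correlation")])

      # reduce a time series of descriptors to a few principal components:
      # scores = PCA(n_components=3).fit_transform(np.vstack(descriptor_list))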

  7. Imaging interferometers for analysis of Thomson scattered spectra

    NASA Astrophysics Data System (ADS)

    Howard, J.; Hatae, T.

    2008-10-01

    Polarization interferometers have some potential efficiency advantages for imaging Thomson scattering spectral analysis. In this article we present a number of designs for high-efficiency imaging polarization interferometers for Thomson scattering spectral analysis. The use of high-efficiency crystal polarizing beamsplitters (both displacement and angle) results in low-loss complementary passbands (no edge losses), simple imaging systems, and wide field of view. The efficiency and relative merits of both multiple-filter and dispersive-type configurations are being assessed before installation on the JT-60U ruby-laser Thomson scattering system. Light is transferred from the viewing port via a linear array of optical fiber bundles which will be imaged through the interferometer onto the photocathode of an intensified charge coupled device camera. Because of the broadband nature of the Thomson light, the optical delays required to Fourier analyze the spectrum are quite small. This leads to compact multicolor or dispersive systems based on combinations of Wollaston and Savart splitters and traditional waveplates.

  8. Automated radar image analysis research in support of military needs

    NASA Astrophysics Data System (ADS)

    Rohde, Frederick W.; Chen, Pi-Fuay; Hevenor, Richard A.

    1986-10-01

    Synthetic aperture radars (SAR) are high resolution radars that can be used for reconnaissance, surveillance, and terrain analysis. The high resolution in range and azimuth is achieved by pulse compression and phase history processing, respectively. SAR images have much in common with optical images such as aerial photographs. Both are characterized by tones, patterns, shapes, and shadows. There are, however, significant differences between SAR and optical images due to the differences in the wavelengths and in the illumination and reflection of the targets. Cloud cover presents an obstacle to optical imagery but not to SAR imagery because radar waves can penetrate cloud cover. Optical imagery provides more detailed information than SAR imagery because of its higher resolution. The resolution of optical imagery decreases with distance whereas the resolution of SAR imagery is independent of distance. For large distances, for example from satellites to the surface of the Earth, the resolution of SAR imagery approaches the resolution of optical imagery. These properties make SAR a very useful tool for military purposes. SAR systems can collect large quantities of imagery. For the timely and economic analysis and interpretation of SAR imagery there is a need for the development of automated and interactive capabilities that will reduce the dependency on and requirements for highly trained image analysts.

  9. Eigenvector spatial filtering for image analysis: An efficient algorithm

    NASA Astrophysics Data System (ADS)

    Rura, Melissa J.

    Eigenvector Spatial Filtering (ESF) is an established method in the social science literature for incorporating spatial information into model specifications. ESF computes spatial eigenvectors, which are defined by the spatial structure associated with a variable. One important limitation of this technique is that it becomes computationally intensive in image analysis because of the massive number of image pixels. This research develops an algorithm that makes ESF more efficient by using the analytical solution for the eigenvalues and spatial eigenvectors. These eigenvectors are essentially a series of orthogonal, uncorrelated map patterns, ranging from positively to negatively spatially autocorrelated, that describe global, regional, and local patterns of spatial dependence in a surface. A reformulation of the analytical solution reduces the required computations and allows the eigenvectors to be computed sequentially. Finally, a series of sampling methods is explored. The algorithm is applied to three example multispectral images of different sizes: small (~200,000 pixels), medium (~1,000,000 pixels) and large (~110,000,000 pixels). The spatial filters produced by these sampling techniques are compared with the filter generated from the complete spectral information. For the efficiency evaluation, the time required to construct filters through sampling versus analysis of the complete image surface is measured, and the complexity of set-up and execution of the sampled and distributed algorithms is assessed.
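
    For a regular image grid the eigenvectors in question have a closed form, which is what makes the algorithm tractable: the rook-adjacency matrix of an Nr x Nc grid is the Kronecker sum of two path graphs, so its eigenvectors are outer products of 1D sine modes and its eigenvalues are sums of cosines. The sketch below builds the most positively autocorrelated map patterns from that analytical solution; published ESF work usually uses a centered connectivity matrix, so treat this raw-adjacency version as a simplified illustration.

      import numpy as np

      def grid_eigenvectors(nr, nc, n_keep=25):
          i = np.arange(1, nr + 1); j = np.arange(1, nc + 1)
          # 1-D sine eigenvectors of a path graph, with eigenvalues 2*cos(...)
          Vr = np.sin(np.pi * np.outer(i, i) / (nr + 1)) * np.sqrt(2.0 / (nr + 1))
          Vc = np.sin(np.pi * np.outer(j, j) / (nc + 1)) * np.sqrt(2.0 / (nc + 1))
          lam = (2 * np.cos(np.pi * i / (nr + 1))[:, None]
                 + 2 * np.cos(np.pi * j / (nc + 1))[None, :])
          # keep the n_keep most positively spatially autocorrelated patterns
          order = np.argsort(lam.ravel())[::-1][:n_keep]
          pp, qq = np.unravel_index(order, lam.shape)
          vecs = np.stack([np.outer(Vr[:, p], Vc[:, q]).ravel()
                           for p, q in zip(pp, qq)], axis=1)
          return vecs, lam.ravel()[order]    # (nr*nc, n_keep) and eigenvalues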

  10. jSIPRO - analysis tool for magnetic resonance spectroscopic imaging.

    PubMed

    Jiru, Filip; Skoch, Antonin; Wagnerova, Dita; Dezortova, Monika; Hajek, Milan

    2013-10-01

    Magnetic resonance spectroscopic imaging (MRSI) involves a huge number of spectra to be processed and analyzed. Several tools enabling MRSI data processing have been developed and are widely used. However, these processing programs primarily focus on sophisticated spectral processing and offer limited support for the analysis of the calculated spectroscopic maps. In this paper the jSIPRO (java Spectroscopic Imaging PROcessing) program is presented, which is a java-based graphical interface enabling post-processing, viewing, analysis and result reporting of MRSI data. Interactive graphical processing as well as protocol-controlled batch processing are available in jSIPRO. jSIPRO does not contain a built-in fitting program. Instead, it makes use of fitting programs from third parties and manages the data flows. Currently, automatic spectra processing using the LCModel, TARQUIN and jMRUI programs is supported. Concentration and error values, fitted spectra, metabolite images and various parametric maps can be viewed for each calculated dataset. Metabolite images can be exported in the DICOM format either for archiving purposes or for use in neurosurgery navigation systems. PMID:23870172

  11. Error analysis of large aperture static interference imaging spectrometer

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zhang, Guo

    2015-12-01

    The Large Aperture Static Interference Imaging Spectrometer (LASIS) is a new type of spectrometer with a light structure, high spectral linearity, high luminous flux and wide spectral range; it overcomes the contradiction between high flux and high stability, giving it important value in scientific studies and applications. However, because its imaging style differs from that of traditional imaging spectrometers, the errors in the LASIS imaging process follow different laws, and its data processing is correspondingly complicated. To improve the accuracy of spectrum detection and to serve quantitative analysis and monitoring of topographical surface features, the error laws of LASIS imaging must be understood. In this paper, LASIS errors are classified as interferogram errors, radiometric correction errors and spectral inversion errors, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is presented, in which the interferogram error of LASIS under combined temporal and spatial modulation is experimentally analyzed, together with the errors from the radiometric correction and spectral inversion processes.

  12. Within-subject template estimation for unbiased longitudinal image analysis

    PubMed Central

    Reuter, Martin; Schmansky, Nicholas J.; Rosas, H. Diana; Fischl, Bruce

    2012-01-01

    Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. PMID:22430496

  13. Analysis and design of a refractive virtual image system

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M.

    1977-01-01

    The optical performance of a virtual image display system is evaluated. Observation of a two-element (unachromatized doublet) refractive system led to the conclusion that the major source of image degradation was lateral chromatic aberration. This conclusion was verified by computer analysis of the system. The lateral chromatic aberration is given in terms of the resolution of the phosphor dots on a standard shadow mask color cathode ray tube. Single wavelength considerations include: astigmatism, apparent image distance from the observer, binocular disparities and differences of angular magnification of the images presented to each of the observer's eyes. Where practical, these results are related to the performance of the human eye. All these techniques are applied to the previously mentioned doublet and a triplet refractive system. The triplet provides a 50-percent reduction in lateral chromatic aberration which was the design goal. Distortion was also reduced to a minimum over the field of view. The methods used in the design of the triplet are presented along with a method of relating classical aberration curves to image distance and binocular disparity.

  14. Real-time oriented image analysis in microcirculatory research

    NASA Astrophysics Data System (ADS)

    Pries, Axel R.; Eriksson, S. E.; Jepsen, H.

    1990-11-01

    A digital video image analysis system is presented which consists of a personal computer equipped with a real-time video digitizer and a graphics tablet, controlled by a modular set of programs that perform a number of real-time measuring tasks in microcirculatory research. Such tasks comprise the continuous recording of vessel diameters, flow velocities or light intensity profiles from video recordings obtained during intravital microscopy of the terminal vascular bed, either in research animals or in human beings. Two outstanding features of the presented system are (A) the spatial correlation module for velocity measurement and (B) the automatic background movement correction. A: The spatial correlation velocity measurement module, combined with an asymmetric illumination or gating process for image generation, allows measurement of flow velocities from video microscopic images up to 20 mm/sec. This is about 10 to 20 times faster than the maximum velocities which can be measured using conventional video-based techniques. B: The automatic background movement correction is designed to track translational movements of background image structures in a reference window in real time (with respect to the video system). The translational vector of the image background is then used to adjust the position of the individual measuring lines or windows used in the different application modules to their original position relative to the tissue structures under investigation. Such an automatic real-time tracking system is very often a

  15. Sensitivity analysis of imaging geometries for prostate diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaodong; Zhu, Timothy C.

    2008-02-01

    Endoscopic and interstitial diffuse optical tomography have been studied in clinical investigations for imaging prostate tissues, yet there is no comprehensive comparison of how these two imaging geometries affect the quality of the reconstructed images. In this study, the effect of imaging geometry is investigated by comparing the cross-sections of the Jacobian sensitivity matrix and the reconstructed images for three-dimensional mathematical phantoms. Next, the effect of source-detector configurations and the number of measurements in both geometries is evaluated using singular value analysis. The amount of information contained in each source-detector configuration and for different numbers of measurements is compared. Further, the effect of different measurement strategies for 3D endoscopic and interstitial tomography is examined, and the pros and cons of using in-plane and off-plane measurements are discussed. Results showed that reconstruction in the interstitial geometry outperforms the endoscopic geometry when deeper anomalies are present. Eight sources with 8 detectors, and 6 sources with 12 detectors, are sufficient for 2D reconstruction with the endoscopic and interstitial geometries, respectively. For a 3D problem, the quantitative accuracy in the interstitial geometry is significantly improved by using off-plane measurements, but only slightly in the endoscopic geometry.
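
    The singular value analysis used to compare configurations reduces to a few lines: compute the singular value spectrum of each geometry's Jacobian and count the modes above a noise-related threshold. The Jacobians themselves come from a diffusion forward model, which is assumed here, as is the threshold value.

      import numpy as np

      def usable_modes(jacobian, noise_floor=1e-3):
          """Number of singular modes above a relative noise threshold."""
          s = np.linalg.svd(jacobian, compute_uv=False)
          s = s / s[0]                      # normalize to the largest singular value
          return int(np.sum(s > noise_floor))

      # e.g. compare geometries for a fixed measurement budget:
      # print(usable_modes(J_endoscopic), usable_modes(J_interstitial))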

  16. Analysis of licensed South African diagnostic imaging equipment

    PubMed Central

    Kabongo, Joseph Mwamba; Nel, Susan; Pitcher, Richard Denys

    2015-01-01

    Objective: To conduct an analysis of all registered South African (SA) diagnostic radiology equipment, assess the number of equipment units per capita by imaging modality, and compare SA figures with published international data, in preparation for the introduction of national health insurance (NHI) in SA. Methods: The SA Radiation Control Board's database of registered diagnostic radiology equipment was analysed by modality, province and healthcare sector. Access to services was expressed as the number of units per million population and compared with published international data. Results: General X-ray units are the most equitably distributed and accessible resource (34.8/million). For fluoroscopy (6.6/million), mammography (4.96/million), computed tomography (5.0/million) and magnetic resonance imaging (2.9/million), there are at least 10-fold discrepancies between the least and best resourced provinces. Although SA's overall imaging capacity is well above that of other countries in sub-Saharan Africa, it is lower than that of all Organisation for Economic Co-operation and Development (OECD) countries. SA's radiological resources most closely approximate those of the United Kingdom, but remain substantially lower. Conclusion: SA access to radiological services is lower than that of any OECD country. For the NHI to achieve equitable access to diagnostic imaging for all citizens, SA will need a more homogeneous distribution of specialised radiological resources and customised imaging guidelines. PMID:26834910

  17. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method was evaluated on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.
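
    The paper's own edge chaining and top-down line detection are custom; a rough OpenCV analogue of the first two stages, useful for getting a feel for the approach, is sketched below with assumed thresholds and a hypothetical input file name.

      import cv2
      import numpy as np

      page = cv2.imread("comic_page.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
      edges = cv2.Canny(page, 50, 150)
      lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                              minLineLength=int(0.2 * page.shape[1]), maxLineGap=5)
      # long, nearly axis-aligned lines are candidate storyboard border lines
      borders = [l[0] for l in (lines if lines is not None else [])
                 if abs(l[0][0] - l[0][2]) < 5 or abs(l[0][1] - l[0][3]) < 5]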

  18. Optical coherence tomography imaging based on non-harmonic analysis

    NASA Astrophysics Data System (ADS)

    Cao, Xu; Hirobayashi, Shigeki; Chong, Changho; Morosawa, Atsushi; Totsuka, Koki; Suzuki, Takuya

    2009-11-01

    A new processing technique called Non-Harmonic Analysis (NHA) is proposed for OCT imaging. Conventional Fourier-domain OCT relies on the FFT, whose result depends on the window function and window length: axial resolution is inversely proportional to the FFT frame length, which is limited by the swept range of the source in SS-OCT or by the pixel count of the CCD in SD-OCT. The NHA process is intrinsically free from these trade-offs; NHA can resolve high frequencies without being influenced by the window function or the frame length of the sampled data. In this study, the NHA process is explained, applied to OCT imaging, and compared with OCT images based on the FFT. To validate the benefit of NHA in OCT, we carried out NHA-based OCT imaging of three different samples: onion skin, human skin and pig eye. The results show that the NHA process can achieve practical image resolution equivalent to that of a 100 nm swept range while using less than half that wavelength range.

  19. Designing multistatic ultrasound imaging systems using software analysis

    NASA Astrophysics Data System (ADS)

    Lee, Michael; Singh, Rahul S.; Culjat, Martin O.; Stubbs, Scott; Natarajan, Shyam; Brown, Elliott R.; Grundfest, Warren S.; Lee, Hua

    2010-03-01

    This paper describes the method of using the finite-element analysis software, PZFlex, to direct the design of a novel ultrasound imaging system which uses conformal transducer arrays. Current challenges in ultrasound array technology, including 2D array processing, have motivated exploration into new data acquisition and reconstruction techniques. Ultimately, these efforts encourage a broader examination of the processes used to effectively validate new array configurations and image formation procedures. Commercial software available today is capable of efficiently and accurately modeling detailed operational aspects of customized arrays. Combining quality simulated data with prototyped reconstruction techniques presents a valuable tool for testing novel schemes before committing more costly resources. To investigate this practice, we modeled three 1D ultrasound arrays operating multistatically instead of by the conventional phased-array approach. They are: a simple linear array, a half-circle array with 180-degree coverage, and a full circular array for inward imaging. We present the process used to create unique array models in PZFlex, simulate operation and obtain data, and subsequently generate images by inputting data into a reconstruction algorithm in MATLAB. Further discussion describes the tested reconstruction algorithm and includes resulting images.

  20. Mapping Fire Severity Using Imaging Spectroscopy and Kernel Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Prasad, S.; Cui, M.; Zhang, Y.; Veraverbeke, S.

    2014-12-01

    Improved spatial representation of within-burn heterogeneity after wildfires is paramount to effective land management decisions and more accurate fire emissions estimates. In this work, we demonstrate the feasibility and efficacy of airborne imaging spectroscopy (hyperspectral imagery) for quantifying wildfire burn severity, using kernel-based image analysis techniques. Two airborne hyperspectral datasets, acquired over the 2011 Canyon and 2013 Rim fires in California using the Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) sensor, were used in this study. The Rim Fire, covering parts of Yosemite National Park, started on August 17, 2013, and was the third largest fire in California's history. The Canyon Fire occurred in the Tehachapi mountains and started on September 4, 2011. In addition to post-fire data for both fires, half of the Rim fire was also covered with pre-fire images. Fire severity was measured in the field using the Geo Composite Burn Index (GeoCBI). The field data were used to train and validate our models, wherein the trained models, in conjunction with the imaging spectroscopy data, were used for GeoCBI estimation over wide geographical regions. This work presents an approach for using remotely sensed imagery combined with GeoCBI field data to map fire scars based on a non-linear (kernel-based) epsilon-Support Vector Regression (e-SVR), which was used to learn the relationship between spectra and GeoCBI in a kernel-induced feature space. Classification of healthy vegetation versus fire-affected areas based on morphological multi-attribute profiles was also studied. The availability of pre- and post-fire imaging spectroscopy data over the Rim Fire provided a unique opportunity to evaluate the performance of bi-temporal imaging spectroscopy for assessing post-fire effects. This type of data is currently constrained because of limited airborne acquisitions before a fire, but will become widespread with future spaceborne sensors such as those on
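
    The regression stage maps plot-level spectra to field GeoCBI with a kernel epsilon-SVR; a scikit-learn sketch is given below. The pipeline and hyperparameter values are assumptions for illustration, not the settings used in the study.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      # X: (n_plots, n_bands) AVIRIS spectra at field plots; y: (n_plots,) GeoCBI
      def fit_geocbi_model(X, y):
          model = make_pipeline(StandardScaler(),
                                SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma="scale"))
          return model.fit(X, y)

      # severity_map = fit_geocbi_model(X, y).predict(scene_pixels)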

  1. [Decomposition of Interference Hyperspectral Images Using Improved Morphological Component Analysis].

    PubMed

    Wen, Jia; Zhao, Jun-suo; Wang, Cai-ling; Xia, Yu-li

    2016-01-01

    Because of the special imaging principle of interference hyperspectral image data, every frame contains many vertical interference stripes. The stripes' positions are fixed and their pixel values are very high, and horizontal displacements also exist in the background between frames. These special characteristics destroy the regular structure of the original interference hyperspectral image data, so that direct application of compressive sensing theory and traditional compression algorithms cannot achieve the desired effect. Since the interference stripe signals and the background signals have different characteristics, the orthogonal bases that sparsely represent them will also be different. Following this idea, morphological component analysis (MCA) is adopted in this paper to separate the interference stripe signals from the background signals. Because the huge volume of interference hyperspectral imagery leads to slow iterative convergence and low computational efficiency in the traditional MCA algorithm, an improved MCA algorithm is also proposed that exploits the characteristics of interference hyperspectral image data. The iterative convergence condition is improved so that iteration terminates when the error between the separated image signals and the original image signals is almost unchanged. In addition, based on the notion that an orthogonal basis can sparsely represent the corresponding signal but not other signals, an adaptive threshold update is proposed to accelerate the traditional MCA algorithm: the projection coefficients of the image signals onto the different orthogonal bases are calculated and compared to obtain the minimum and maximum threshold values, and their average is chosen as the optimal threshold for the adaptive update. The experimental results prove that

  2. Semi-automated porosity identification from thin section images using image analysis and intelligent discriminant classifiers

    NASA Astrophysics Data System (ADS)

    Ghiasi-Freez, Javad; Soleimanpour, Iman; Kadkhodaie-Ilkhchi, Ali; Ziaii, Mansur; Sedighi, Mahdi; Hatampour, Amir

    2012-08-01

    Identification of the different types of porosity within a reservoir rock is a functional parameter for reservoir characterization, since the various pore types play different roles in fluid transport and the pore spaces determine the fluid storage capacity of the reservoir. The present paper introduces a model for semi-automatic identification of porosity types within thin section images. To this end, a pattern recognition algorithm is followed. First, six geometrical shape parameters of the sixteen largest pores of each image are extracted using image analysis techniques. The extracted parameters and the corresponding pore types of 294 pores are used to train two intelligent discriminant classifiers, namely linear and quadratic discriminant analysis. The trained classifiers take the geometrical features of the pores and identify the type and percentage of five types of porosity, including interparticle, intraparticle, oomoldic, biomoldic, and vuggy, in each image. The accuracy of the classifiers is determined from two standpoints. First, the predicted and measured percentages of each type of porosity are compared with each other; the results indicate reliable performance in predicting the percentage of each porosity type. Second, the precision of the classifiers in categorizing the pore spaces is analyzed; the classifiers also achieved a high acceptance score when used for individual recognition of pore spaces. The proposed methodology is a promising application for petroleum geologists, allowing statistical study of pore types in a rapid and accurate way.
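
    The two classifiers are standard and easy to reproduce; the sketch below compares them by cross-validation on the six shape parameters per pore, using scikit-learn. The variable contents and the 5-fold scheme are assumptions, not the paper's protocol.

      from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                                 QuadraticDiscriminantAnalysis)
      from sklearn.model_selection import cross_val_score

      # X: (294, 6) geometrical shape parameters; y: pore-type labels
      def compare_classifiers(X, y):
          for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
              score = cross_val_score(clf, X, y, cv=5).mean()
              print(type(clf).__name__, round(score, 3))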

  3. Image analysis for beef quality prediction from serial scan ultrasound images

    NASA Astrophysics Data System (ADS)

    Zhang, Hui L.; Wilson, Doyle E.; Rouse, Gene H.; Izquierdo, Mercedes M.

    1995-01-01

    The prediction of intramuscular fat (or marbling) in live beef animals using serially scanned ultrasound images was studied in this paper. Image analysis, both in the gray-scale intensity domain and in the frequency spectrum domain, was used to extract image features of tissue characteristics and obtain useful parameters for prediction models. First-, second- and third-order multivariable prediction models were developed from randomly selected data sets and tested using the remaining data sets. Comparisons between predictions using serially scanned images and those using only the final scans showed a clear improvement in prediction accuracy. The correlation between predicted and actual percent fat increased from 0.68 to 0.80 and from 0.72 to 0.76 for the two groups of data, respectively; the R-squared values increased from 0.65 to 0.68 and from 0.68 to 0.72, and the root mean square errors decreased from 1.70 to 1.52 and from 1.22 to 1.12. This study indicates that serially obtained ultrasound images of live beef animals have good potential for improving the accuracy of percent fat prediction.

  4. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656

  5. Bayesian Analysis Of HMI Solar Image Observables And Comparison To TSI Variations And MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.

    2014-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich, et al, 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from May, 2010 to June, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  6. Bayesian Analysis of Hmi Images and Comparison to Tsi Variations and MWO Image Observables

    NASA Astrophysics Data System (ADS)

    Parker, D. G.; Ulrich, R. K.; Beck, J.; Tran, T. V.

    2015-12-01

    We have previously applied the Bayesian automatic classification system AutoClass to solar magnetogram and intensity images from the 150 Foot Solar Tower at Mount Wilson to identify classes of solar surface features associated with variations in total solar irradiance (TSI) and, using those identifications, modeled TSI time series with improved accuracy (r > 0.96) (Ulrich, et al, 2010). AutoClass identifies classes by a two-step process in which it: (1) finds, without human supervision, a set of class definitions based on specified attributes of a sample of the image data pixels, such as magnetic field and intensity in the case of MWO images, and (2) applies the class definitions thus found to new data sets to identify automatically in them the classes found in the sample set. HMI high resolution images capture four observables (magnetic field, continuum intensity, line depth and line width), in contrast to MWO's two observables (magnetic field and intensity). In this study, we apply AutoClass to the HMI observables for images from June, 2010 to December, 2014 to identify solar surface feature classes. We use contemporaneous TSI measurements to determine whether and how variations in the HMI classes are related to TSI variations and compare the characteristic statistics of the HMI classes to those found from MWO images. We also attempt to derive scale factors between the HMI and MWO magnetic and intensity observables. The ability to categorize automatically surface features in the HMI images holds out the promise of consistent, relatively quick and manageable analysis of the large quantity of data available in these images. Given that the classes found in MWO images using AutoClass have been found to improve modeling of TSI, application of AutoClass to the more complex HMI images should enhance understanding of the physical processes at work in solar surface features and their implications for the solar-terrestrial environment. Ulrich, R.K., Parker, D, Bertello, L. and

  7. Energy flow: image correspondence approximation for motion analysis

    NASA Astrophysics Data System (ADS)

    Wang, Liangliang; Li, Ruifeng; Fang, Yajun

    2016-04-01

    We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. At the core of the energy invariance constraint is an "energy conservation law" which assumes that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at the different layers is reconstructed by considering different degrees of smoothness. Owing to its multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence-searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

  8. ON THE VERIFICATION AND VALIDATION OF GEOSPATIAL IMAGE ANALYSIS ALGORITHMS

    SciTech Connect

    Roberts, Randy S.; Trucano, Timothy G.; Pope, Paul A.; Aragon, Cecilia R.; Jiang , Ming; Wei, Thomas; Chilton, Lawrence; Bakel, A. J.

    2010-07-25

    Verification and validation (V&V) of geospatial image analysis algorithms is a difficult task and is becoming increasingly important. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds by objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities

  9. Dynamic chest image analysis: model-based ventilation study with pyramid images

    NASA Astrophysics Data System (ADS)

    Liang, Jianming; Jaervi, Timo; Kiuru, Aaro J.; Kormano, Martti; Svedstrom, Erkki; Virkki, Raimo

    1997-05-01

    The aim of the 'dynamic chest image analysis' study is to develop computational analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected at different phases of the respiratory/cardiac cycles. A multiresolutional method for ventilation study with an explicit ventilation model based on pyramid images is proposed in this paper. The ventilation model is sophisticated enough to cover both inhalation and exhalation phases, yet remains simple to realize. This model plays a critical role in extracting accurate, geographic ventilation parameters, while the pyramid helps in understanding ventilation at multiple resolutions and speeds up the convergence process in optimization. A number of patients have been studied with a research prototype implemented in MATLAB. The prototype has proven to be a useful aid in dynamic pulmonary ventilation study. For clinical use, however, further work remains to be done.

  10. Caries imaging by teeth (auto)luminescence spectral analysis

    NASA Astrophysics Data System (ADS)

    Jonusauskas, Gediminas; Abraham, Emmanuel; Oberle, Jean; Rulliere, Claude; Peli, Jean-Francoic; Dorignac, Georges

    2003-10-01

    We propose a new technique for caries imaging by spectral analysis of tooth luminescence excited by near-UV light. This diagnostic/control method can be applied to all optically accessible tooth surfaces. The photo-physical studies suggest that hydroxylapatite luminescence, excited in the near UV, comes from de-excitation of crystalline structure defects in interaction with a charge donating/accepting environment.

  11. Software for visualization, analysis, and manipulation of laser scan images

    NASA Astrophysics Data System (ADS)

    Burnsides, Dennis B.

    1997-03-01

    The recent introduction of laser surface scanning to scientific applications presents a challenge to computer scientists and engineers. Full utilization of this two-dimensional (2-D) and three-dimensional (3-D) data requires advances in techniques and methods for data processing and visualization. This paper explores the development of software to support the visualization, analysis and manipulation of laser scan images. Specific examples presented are from ongoing efforts at the Air Force Computerized Anthropometric Research and Design (CARD) Laboratory.

  12. Comparative analysis of imaging configurations and objectives for Fourier microscopy.

    PubMed

    Kurvits, Jonathan A; Jiang, Mingming; Zia, Rashid

    2015-11-01

    Fourier microscopy is becoming an increasingly important tool for the analysis of optical nanostructures and quantum emitters. However, achieving quantitative Fourier space measurements requires a thorough understanding of the impact of aberrations introduced by optical microscopes that have been optimized for conventional real-space imaging. Here we present a detailed framework for analyzing the performance of microscope objectives for several common Fourier imaging configurations. To this end, we model objectives from Nikon, Olympus, and Zeiss using parameters that were inferred from patent literature and confirmed, where possible, by physical disassembly. We then examine the aberrations most relevant to Fourier microscopy, including the alignment tolerances of apodization factors for different objective classes, the effect of magnification on the modulation transfer function, and vignetting-induced reductions of the effective numerical aperture for wide-field measurements. Based on this analysis, we identify an optimal objective class and imaging configuration for Fourier microscopy. In addition, the Zemax files for the objectives and setups used in this analysis have been made publicly available as a resource for future studies. PMID:26560923

  13. Dermoscopy analysis of RGB-images based on comparative features

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Neretin, Evgeny Y.; Kozlov, Sergey V.

    2015-09-01

    In this paper, we propose an algorithm for color and texture analysis of dermoscopic images of human skin based on Haar wavelets, Local Binary Patterns (LBP) and Histogram Analysis. This approach is a modification of the «7-point checklist» clinical method. In that sense it is an "absolute" diagnostic method, because it uses only features extracted from the tumor's ROI (Region of Interest), which can be selected manually and/or by a special algorithm. We propose additional features extracted from the same image for comparative analysis of tumor and healthy skin. We used Euclidean distance, cosine similarity, and the Tanimoto coefficient as comparison metrics between color and texture features extracted separately from the tumor's and the healthy skin's ROI. A classifier for separating melanoma images from other tumors has been built with the SVM (Support Vector Machine) algorithm. Classification errors with and without comparative features between skin and tumor have been analyzed, and a significant increase in recognition quality with comparative features has been demonstrated. We also analyzed two modes (manual and automatic) for selecting ROIs on tumor and healthy skin areas. We reached 91% sensitivity using comparative features, in contrast with 77% sensitivity using the "absolute" method alone; specificity was unchanged (94%) in both cases.
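
    The three comparison metrics named above are standard and easy to reproduce. A minimal sketch, assuming feature vectors for the tumor ROI and a healthy-skin ROI have already been extracted (the Haar-wavelet, LBP and histogram steps are omitted); the function name is illustrative:

```python
import numpy as np

def comparison_metrics(f_tumor, f_skin):
    """Compare feature vectors from the tumor ROI and a healthy-skin ROI."""
    f_tumor = np.asarray(f_tumor, float)
    f_skin = np.asarray(f_skin, float)
    euclidean = np.linalg.norm(f_tumor - f_skin)
    cosine = f_tumor @ f_skin / (np.linalg.norm(f_tumor) * np.linalg.norm(f_skin))
    # Tanimoto for real-valued vectors: A.B / (|A|^2 + |B|^2 - A.B)
    tanimoto = f_tumor @ f_skin / (
        f_tumor @ f_tumor + f_skin @ f_skin - f_tumor @ f_skin)
    return euclidean, cosine, tanimoto
```

    These values can be appended to the "absolute" feature vector before SVM training, which is the role the comparative features play in the paper.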

  14. Image interpretation for landforms using expert systems and terrain analysis

    SciTech Connect

    Al-garni, A.M.

    1992-01-01

    Most current research in digital photogrammetry and computer vision concentrates on the early vision process; little research has been done on the late vision process. The late vision process in image interpretation contains descriptive knowledge and heuristic information which requires artificial intelligence (AI) in order for it to be modeled. This dissertation has introduced expert systems, as AI tools, and terrain analysis to the late vision process. This goal has been achieved by selecting and theorizing landforms as features for image interpretation. These features present a wide spectrum of knowledge that can furnish a good foundation for image interpretation processes. In this dissertation an EXpert system for LANdform interpretation using Terrain analysis (EXPLANT) was developed. EXPLANT can interpret the major landforms on earth. The system contains sample military and civilian consultations regarding site analysis and development. A learning mechanism was developed to accommodate necessary improvements to the database in the face of rapidly advancing and dynamically changing technology and knowledge. Many interface facilities and menu-driven screens were developed in the system to aid the users. Finally, the system has been tested and verified to be working properly.

  15. Geographic Object-Based Image Analysis - Towards a new paradigm

    NASA Astrophysics Data System (ADS)

    Blaschke, Thomas; Hay, Geoffrey J.; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk

    2014-01-01

    The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and of the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.

  16. Chlorophyll fluorescence analysis and imaging in plant stress and disease

    SciTech Connect

    Daley, P.F.

    1994-12-01

    Quantitative analysis of chlorophyll fluorescence transients and quenching has evolved rapidly in the last decade. Instrumentation capable of fluorescence detection in bright actinic light has been used in conjunction with gas exchange analysis to build an empirical foundation relating quenching parameters to photosynthetic electron transport, the state of the photoapparatus, and carbon fixation. We have developed several instruments that collect video images of chlorophyll fluorescence. Digitized versions of these images can be manipulated as numerical data arrays, supporting generation of quenching maps that represent the spatial distribution of photosynthetic activity in leaves. We have applied this technology to analysis of fluorescence quenching during application of stress hormones, herbicides, physical stresses including drought and sudden changes in humidity of the atmosphere surrounding leaves, and during stomatal oscillations in high CO2. We describe a recently completed portable fluorescence imaging system utilizing LED illumination and a consumer-grade camcorder, that will be used in long-term, non-destructive field studies of plant virus infections.

  17. Bubble Counts for Rayleigh-Taylor Instability Using Image Analysis

    SciTech Connect

    Miller, P L; Gezahegne, A G; Cook, A W; Cabot, W H; Kamath, C

    2007-01-24

    We describe the use of image analysis to count bubbles in 3-D, large-scale, large-eddy simulations (LES) [1] and direct numerical simulations (DNS) [2] of the Rayleigh-Taylor instability. We analyze these massive datasets by first converting the 3-D data to 2-D, then counting the bubbles in the 2-D data. Our plots for the bubble count indicate there are four distinct regimes in the process of the mixing of the two fluids. We also show that our results are relatively insensitive to the choice of parameters in our analysis algorithms.
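
    The abstract does not detail the counting algorithm itself; a generic sketch of the 2-D counting step, assuming the 3-D-to-2-D conversion has already produced a scalar field and that a simple threshold separates bubbles from the background (both the threshold and the single-pixel cleanup are illustrative):

```python
import numpy as np
from scipy import ndimage

def count_bubbles(slice_2d, threshold=0.5):
    """Count bubbles in a 2-D field (e.g., a projected mixing-layer slice)
    by thresholding into a binary mask and labeling connected components."""
    mask = slice_2d > threshold
    labels, n_components = ndimage.label(mask)
    # Discard single-pixel components as likely noise.
    sizes = ndimage.sum(mask, labels, range(1, n_components + 1))
    return int(np.sum(sizes > 1))
```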

  18. Pathological diagnosis of bladder cancer by image analysis of hypericin induced fluorescence cystoscopic images

    NASA Astrophysics Data System (ADS)

    Kah, James C. Y.; Olivo, Malini C.; Lau, Weber K. O.; Sheppard, Colin J. R.

    2005-08-01

    Photodynamic diagnosis of bladder carcinoma based on hypericin fluorescence cystoscopy has been shown to have a higher degree of sensitivity for the detection of flat bladder carcinoma than white light cystoscopy. This study investigates the potential of photosensitizer hypericin-induced fluorescence for non-invasive optical biopsy, grading bladder cancer in vivo through fluorescence cystoscopic image analysis, without surgical resection for tissue biopsy. The correlation between tissue fluorescence and histopathology of diseased tissue was explored, and a diagnostic algorithm based on fluorescence image analysis was developed to classify the bladder cancer without surgical resection for tissue biopsy. Preliminary results suggest a correlation between tissue fluorescence and bladder cancer grade. Combining the red-to-blue and red-to-green intensity ratios into a 2D scatter plot yields an average sensitivity and specificity of around 70% and 85%, respectively, for pathological grading of the three different grades of bladder cancer. The diagnostic algorithm based on colorimetric intensity ratio analysis of hypericin fluorescence cystoscopic images developed in this preliminary study therefore shows promising potential to optically diagnose and grade bladder cancer in vivo.

  19. PHOG analysis of self-similarity in aesthetic images

    NASA Astrophysics Data System (ADS)

    Amirshahi, Seyed Ali; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2012-03-01

    non-aesthetic categories of monochrome images. The aesthetic image datasets comprise a large variety of artworks of Western provenance. Other man-made aesthetically pleasing images, such as comics, cartoons and mangas, were also studied. For comparison, a database of natural scene photographs is used, as well as datasets of photographs of plants, simple objects and faces that are in general of low aesthetic value. As expected, natural scenes exhibit the highest degree of PHOG self-similarity. Images of artworks also show high self-similarity values, followed by cartoons, comics and mangas. On average, other (non-aesthetic) image categories are less self-similar in the PHOG analysis. A measure of scale-invariant self-similarity (PHOG) allows a good separation of the different aesthetic and non-aesthetic image categories. Our results provide further support for the notion that, like complex natural scenes, images of artworks display a higher degree of self-similarity across different scales of resolution than other image categories. Whether the high degree of self-similarity is the basis for the perception of beauty in both complex natural scenery and artworks remains to be investigated.

  20. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and the interpretation of their morphological parameters can be estimated from thin sections or 3D Computed Tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, yielding a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner at 45 μm pixel⁻¹ resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D
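
    For readers unfamiliar with the technique, the α and f(α) spectra are usually obtained by the method of moments: a partition function summed over boxes of decreasing size gives the mass exponents τ(q), and a Legendre transform yields the spectrum. A minimal sketch on a grayscale image, treating normalized pixel intensities directly as a measure so that no thresholding step is needed; the numerical details of the cited thesis may differ:

```python
import numpy as np

def multifractal_spectrum(img, q_values, box_sizes=(2, 4, 8, 16, 32)):
    """Method-of-moments multifractal analysis of a grayscale image:
    normalized intensities are treated as a measure, so no threshold
    or binarization step is needed."""
    mu = img.astype(float)
    mu = mu / mu.sum()
    q_values = np.asarray(q_values, float)
    tau = []
    for q in q_values:
        log_eps, log_chi = [], []
        for s in box_sizes:
            # Sum the measure over s-by-s boxes (crop to a multiple of s).
            h, w = (mu.shape[0] // s) * s, (mu.shape[1] // s) * s
            boxes = mu[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            boxes = boxes[boxes > 0]
            log_eps.append(np.log(s))
            log_chi.append(np.log(np.sum(boxes ** q)))
        tau.append(np.polyfit(log_eps, log_chi, 1)[0])   # slope = tau(q)
    tau = np.asarray(tau)
    # Legendre transform: alpha(q) = d tau / d q, f(alpha) = q*alpha - tau.
    alpha = np.gradient(tau, q_values)
    f_alpha = q_values * alpha - tau
    return alpha, f_alpha
```

    For example, multifractal_spectrum(img, q_values=np.linspace(-5, 5, 41)) traces out the f(α) curve over a symmetric range of moments.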

  1. Computer-aided pulmonary image analysis in small animal models

    SciTech Connect

    Xu, Ziyue; Mansoor, Awais; Mollura, Daniel J.; Bagci, Ulas; Kramer-Marek, Gabriela; Luna, Brian; Kubler, Andre; Dey, Bappaditya; Jain, Sanjay; Foster, Brent; Papadakis, Georgios Z.; Camp, Jeremy V.; Jonsson, Colleen B.; Bishai, William R.; Udupa, Jayaram K.

    2015-07-15

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of airway tree for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases.
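
    The first stage of the framework is a regression from approximated rib-cage volume to expected lung volume, used to flag severe pathology when the initial segmentation falls far short of the prediction. A minimal sketch of that stage only; the linear model, the 20% tolerance, and all names are assumptions for illustration, not values from the paper:

```python
import numpy as np

def fit_expected_lung_volume(ribcage_vols, lung_vols):
    """Fit a linear regression from approximated rib-cage volume to
    total lung capacity over a training set of normal scans."""
    slope, intercept = np.polyfit(ribcage_vols, lung_vols, 1)
    return lambda rc: slope * rc + intercept

def severe_pathology(expected_vol, segmented_vol, tol=0.2):
    """Flag severe pathology when the initial lung segmentation falls
    well short of the expected volume (tolerance is illustrative)."""
    return (expected_vol - segmented_vol) / expected_vol > tol
```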

  2. Computer-aided pulmonary image analysis in small animal models

    PubMed Central

    Xu, Ziyue; Bagci, Ulas; Mansoor, Awais; Kramer-Marek, Gabriela; Luna, Brian; Kubler, Andre; Dey, Bappaditya; Foster, Brent; Papadakis, Georgios Z.; Camp, Jeremy V.; Jonsson, Colleen B.; Bishai, William R.; Jain, Sanjay; Udupa, Jayaram K.; Mollura, Daniel J.

    2015-01-01

    Purpose: To develop an automated pulmonary image analysis framework for infectious lung diseases in small animal models. Methods: The authors describe a novel pathological lung and airway segmentation method for small animals. The proposed framework includes identification of abnormal imaging patterns pertaining to infectious lung diseases. First, the authors’ system estimates an expected lung volume by utilizing a regression function between total lung capacity and approximated rib cage volume. A significant difference between the expected lung volume and the initial lung segmentation indicates the presence of severe pathology, and invokes a machine learning based abnormal imaging pattern detection system next. The final stage of the proposed framework is the automatic extraction of airway tree for which new affinity relationships within the fuzzy connectedness image segmentation framework are proposed by combining Hessian and gray-scale morphological reconstruction filters. Results: 133 CT scans were collected from four different studies encompassing a wide spectrum of pulmonary abnormalities pertaining to two commonly used small animal models (ferret and rabbit). Sensitivity and specificity were greater than 90% for pathological lung segmentation (average dice similarity coefficient > 0.9). While qualitative visual assessments of airway tree extraction were performed by the participating expert radiologists, for quantitative evaluation the authors validated the proposed airway extraction method by using publicly available EXACT’09 data set. Conclusions: The authors developed a comprehensive computer-aided pulmonary image analysis framework for preclinical research applications. The proposed framework consists of automatic pathological lung segmentation and accurate airway tree extraction. The framework has high sensitivity and specificity; therefore, it can contribute advances in preclinical research in pulmonary diseases. PMID:26133591

  3. Multivariate image analysis of laser-induced photothermal imaging used for detection of caries tooth

    NASA Astrophysics Data System (ADS)

    El-Sherif, Ashraf F.; Abdel Aziz, Wessam M.; El-Sharkawy, Yasser H.

    2010-08-01

    Time-resolved photothermal imaging has been investigated to characterize teeth for the purpose of discriminating between normal and carious areas of the hard tissue using a thermal camera. Ultrasonic thermoelastic waves were generated in hard tissue by the absorption of fiber-coupled Q-switched Nd:YAG laser pulses operating at 1064 nm, in conjunction with a laser-induced photothermal technique used to detect the thermal radiation waves for diagnosis of the human tooth. The concepts behind the use of photo-thermal techniques for off-line detection of carious tooth features were presented by our group in earlier work. This paper illustrates the application of multivariate image analysis (MIA) techniques to detect the presence of caries. MIA is used to rapidly detect the presence and quantity of common carious tooth features as they are scanned by high resolution color (RGB) thermal cameras. Multivariate principal component analysis is used to decompose the acquired three-channel tooth images into a two-dimensional principal component (PC) space. Masking score point clusters in the score space and highlighting the corresponding pixels in the image space of the two dominant PCs enables isolation of caries defect pixels based on contrast and color information. The technique provides a qualitative result that can be used for early-stage caries detection. The proposed technique can potentially be used on-line or in real time to prescreen for caries with vision-based systems such as a real-time thermal camera. Experimental results on a large number of extracted teeth, as well as a thermal image panorama of the teeth of a human volunteer, are investigated and presented.
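
    A minimal sketch of the decomposition step, assuming a three-channel (RGB) thermal image as input; the masking of score-point clusters and the mapping back to image pixels described above would operate on the returned score images:

```python
import numpy as np

def pca_score_images(rgb_img, n_components=2):
    """Project each pixel of a 3-channel image onto its dominant
    principal components, returning per-pixel PC 'score images'."""
    h, w, c = rgb_img.shape
    X = rgb_img.reshape(-1, c).astype(float)
    X -= X.mean(axis=0)
    # Eigendecomposition of the small c-by-c channel covariance matrix.
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    scores = X @ eigvecs[:, order]
    return scores.reshape(h, w, n_components)
```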

  4. Wavelet analysis enables system-independent texture analysis of optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Lingley-Papadopoulos, Colleen A.; Loew, Murray H.; Zara, Jason M.

    2009-07-01

    Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.

  5. Local masking in natural images: a database and analysis.

    PubMed

    Alam, Md Mushfiqul; Vilankar, Kedarnath P; Field, David J; Chandler, Damon M

    2014-01-01

    Studies of visual masking have provided a wide range of important insights into the processes involved in visual coding. However, very few of these studies have employed natural scenes as masks. Little is known about how the particular features found in natural scenes affect visual detection thresholds and how the results obtained using unnatural masks relate to the results obtained using natural masks. To address this issue, this paper describes a psychophysical study designed to obtain local contrast detection thresholds for a database of natural images. Via a three-alternative forced-choice experiment, we measured thresholds for detecting 3.7 cycles/° vertically oriented log-Gabor noise targets placed within an 85 × 85-pixel patch (1.9° patch) drawn from 30 natural images from the CSIQ image database (Larson & Chandler, Journal of Electronic Imaging, 2010). Thus, for each image, we obtained a masking map in which each entry in the map denotes the root mean squared contrast threshold for detecting the log-Gabor noise target at the corresponding spatial location in the image. From qualitative observations we found that detection thresholds were affected by several patch properties such as visual complexity, fineness of textures, sharpness, and overall luminance. Our quantitative analysis shows that except for the sharpness measure (correlation coefficient of 0.7), the other tested low-level mask features showed a weak correlation (correlation coefficients less than or equal to 0.52) with the detection thresholds. Furthermore, we evaluated the performance of a computational contrast gain control model that performed fairly well with an average correlation coefficient of 0.79 in predicting the local contrast detection thresholds. We also describe specific choices of parameters for the gain control model. The objective of this database is to provide researchers with a large ground-truth dataset in order to further investigate the properties of the human visual

  6. Analysis of Saturn's Polar Vortices with Cassini ISS Images

    NASA Astrophysics Data System (ADS)

    Sayanagi, Kunio M.; Dyudina, Ulyana A.; Ewald, Shawn P.; Ingersoll, Andrew P.

    2014-11-01

    We present new analyses of Saturn's north pole using high-resolution images captured in late 2012 by the Cassini spacecraft's Imaging Science Subsystem (ISS) camera. The new images reveal the presence of an intense cyclonic vortex centered at the north pole. In the red, green and near-IR methane continuum wavelengths, the north polar region exhibits a spiraling cloud morphology extending from 89 degree N to 85 degree N latitude, with a 4700 km radius. Images captured in the methane absorption bands, which sense upper tropospheric haze, show an approximately circular hole in the haze extending up to 1.5 degree latitude away from the pole. The spiraling morphology and the “eye”-like hole at the center are reminiscent of a terrestrial tropical cyclone. In the System III reference frame (rotation period of 10h39m22.4s, Seidelmann et al 2007), the eastward wind speed increases to about 140 m/s at 89 degree N planetocentric latitude. The vorticity peaks at the pole at (6.5±1.5)×10⁻⁴ s⁻¹, and decreases to (1.3±1.2)×10⁻⁴ s⁻¹ at 89 degree N. In addition, we present new analysis of Saturn's south polar vortex using images captured in January 2007 to compare its cloud morphology to the north pole. The south-polar images reveal an eye-like hole similar to that over the north pole; however, the reflectivity of the upper tropospheric haze is significantly higher over the south pole in 2007 than the north pole in 2012, perhaps exhibiting a seasonal difference.

  7. Simultaneous Analysis and Quality Assurance for Diffusion Tensor Imaging

    PubMed Central

    Lauzon, Carolyn B.; Asman, Andrew J.; Esparza, Michael L.; Burns, Scott S.; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W.; Davis, Nicole; Cutting, Laurie E.; Landman, Bennett A.

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  8. Simultaneous analysis and quality assurance for diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Asman, Andrew J; Esparza, Michael L; Burns, Scott S; Fan, Qiuyun; Gao, Yurui; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E; Landman, Bennett A

    2013-01-01

    Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes/failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and numerous potential use cases complicate traditional visual data inspection approaches. Currently, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software programs (e.g. DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several different metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. To date, these metrics have been largely studied in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary quality analysis (QA) report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies. The efficiency and accuracy of quality analysis using the proposed pipeline is compared with quality analysis based on visual inspection. The unified pipeline is found to save a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of QA metrics to a low

  9. Motion analysis of knee joint using dynamic volume images

    NASA Astrophysics Data System (ADS)

    Haneishi, Hideaki; Kohno, Takahiro; Suzuki, Masahiko; Moriya, Hideshige; Mori, Sin-ichiro; Endo, Masahiro

    2006-03-01

    Acquisition and analysis of three-dimensional movement of the knee joint is desired in orthopedic surgery. We have developed two methods to obtain dynamic volume images of the knee joint. One is a 2D/3D registration method combining bi-plane dynamic X-ray fluoroscopy with static three-dimensional CT; the other uses so-called 4D-CT, with a cone beam and a wide 2D detector. In this paper, we present two analyses of knee joint movement obtained by these methods: (1) transition of the nearest points between femur and tibia, and (2) principal component analysis (PCA) of six parameters representing the three-dimensional movement of the knee. As preprocessing for the analysis, the femur and tibia regions are first extracted from the volume data at each time frame, and then registration of the tibia between different frames by an affine transformation consisting of rotation and translation is performed. The same transformation is applied to the femur as well. Using those image data, the movement of the femur relative to the tibia can be analyzed. Six movement parameters of the femur, consisting of three translation parameters and three rotation parameters, are obtained from those images. In analysis (1), the axis of each bone is first found and then the flexion angle of the knee joint is calculated. For each flexion angle, the minimum distance between femur and tibia and the location giving the minimum distance are found for both the lateral condyle and the medial condyle. As a result, it was observed that the movement of the lateral condyle is larger than that of the medial condyle. In analysis (2), it was found that the movement of the knee can be represented by the first three principal components with a precision of 99.58%, and those three components seem to relate strongly to three major movements of the femur in the knee bend known in orthopedic surgery.
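
    A minimal sketch of analysis (2), assuming the six per-frame parameters are stacked into an (n_frames, 6) array; the 99.58% figure quoted above would correspond to the sum of the first three explained-variance ratios:

```python
import numpy as np

def movement_pca(params):
    """PCA of an (n_frames, 6) array of per-frame knee movement
    parameters (3 translations + 3 rotations) via SVD of the
    centered data; returns explained-variance ratios and components."""
    X = np.asarray(params, float)
    X = X - X.mean(axis=0)
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return explained, vt

# Fraction of movement captured by the first three components:
# explained, _ = movement_pca(params); explained[:3].sum()
```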

  10. Fast variogram analysis of remotely sensed images in HPC environment

    NASA Astrophysics Data System (ADS)

    Pesquer, Lluís; Cortés, Anna; Masó, Joan; Pons, Xavier

    2013-04-01

    Exploring and describing the spatial variation of images is one of the main applications of geostatistics to remote sensing, and the variogram is a very suitable tool for this spatial pattern analysis. Variogram analysis is composed of two steps: empirical variogram generation and fitting a variogram model. Empirical variogram generation is a very quick procedure for most analyses of irregularly distributed samples, but computation time increases quite significantly for remotely sensed images, because the number of samples (pixels) involved is usually huge (more than 30 million for a Landsat TM scene), depending basically on the extent and spatial resolution of the images. In several remote sensing applications this type of analysis is repeated for each image, sometimes for hundreds of scenes and sometimes for each radiometric band (a high number in the case of hyperspectral images), so there is a need for a fast implementation. In order to reduce this high execution time, we developed a parallel solution for the variogram analysis. The solution adopted is the master/worker programming paradigm, in which the master process distributes and coordinates the tasks executed by the worker processes. The code is written in ANSI-C, using MPI (Message Passing Interface) as the message-passing library to communicate the master with the workers. This solution (ANSI-C + MPI) guarantees portability between different computer platforms. The High Performance Computing (HPC) environment is formed by 32 nodes, each with two Dual Core Intel(R) Xeon(R) 3.0 GHz processors and 12 GB of RAM, communicating over integrated dual gigabit Ethernet. This IBM cluster is located in the research laboratory of the Computer Architecture and Operating Systems Department of the Universitat Autònoma de Barcelona. The performance results for a 15 km x 15 km subscene of the 198-31 path-row Landsat TM image are shown in Table 1. The proximity between empirical speedup behaviour and theoretical
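
    The paper's implementation is ANSI-C with MPI in a master/worker layout. As a language-neutral illustration of what is being parallelized, the sketch below distributes per-lag empirical semivariance estimates for axis-aligned lags across a process pool; the lag loop is the natural unit of work, and all names are illustrative:

```python
import numpy as np
from multiprocessing import Pool

def semivariance_at_lag(args):
    """Empirical semivariance of a gridded image for one axis-aligned
    lag h: gamma(h) = mean of squared pair differences / 2."""
    img, h = args
    dx = img[:, h:] - img[:, :-h]    # horizontally separated pairs
    dy = img[h:, :] - img[:-h, :]    # vertically separated pairs
    diffs = np.concatenate([dx.ravel(), dy.ravel()])
    return 0.5 * np.mean(diffs ** 2)

def empirical_variogram(img, max_lag=50, workers=4):
    """Distribute per-lag semivariance estimates over a process pool;
    each lag is an independent task, mirroring a master/worker layout."""
    img = np.asarray(img, float)
    lags = list(range(1, max_lag + 1))
    with Pool(workers) as pool:
        gammas = pool.map(semivariance_at_lag, [(img, h) for h in lags])
    return np.array(lags), np.array(gammas)
```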

  11. Perceptual security of encrypted images based on wavelet scaling analysis

    NASA Astrophysics Data System (ADS)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2016-08-01

    The scaling behavior of the pixel fluctuations of encrypted images is evaluated using detrended fluctuation analysis (DFA) based on wavelets, a modern technique that has recently been used successfully for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that encrypted images in which no understandable information can be visually appreciated, and whose pixels look totally random, present a consistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the wavelet-based DFA is applied. This suggests that the scaling exponents of encrypted images can be used as a perceptual security criterion, in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.
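
    The paper uses a wavelet-based variant of detrended fluctuation analysis. The classical polynomial-detrending DFA below is a stand-in that shows how the scaling exponent α is estimated, e.g. from the flattened pixel sequence of an encrypted image:

```python
import numpy as np

def dfa_alpha(series, scales=(16, 32, 64, 128, 256)):
    """Classical DFA: integrate the mean-removed series, detrend it in
    windows of several sizes, and read alpha off the log-log slope of
    fluctuation versus window size. alpha ~ 0.5 means the sequence is
    uncorrelated, like white noise."""
    x = np.asarray(series, float)
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)           # linear trend per window
            sq.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq)))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Example: dfa_alpha(encrypted_img.ravel()) should be close to 0.5
# for a perceptually secure cipher image, per the abstract's criterion.
```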

  12. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    PubMed

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  13. Investigation of using bone texture analysis on bone densitometry images

    NASA Astrophysics Data System (ADS)

    Chinander, Michael R.; Giger, Maryellen L.; Shah, Ruchi D.; Vokes, Tamara

    2002-05-01

    We previously developed bone texture analysis methods to assess bone strength on digitized radiographs. Here, we compare the analyses performed on digitized screen-film to those obtained on peripheral bone densitometry images. A leg phantom was imaged with both a PIXI (GE Medical Systems; Milwaukee, WI) bone densitometer (0.200-mm pixel size) and a screen-film system, with the films being subsequently digitized by a laser film digitizer (0.100-mm pixel size). The phantom was radiographically scanned multiple times with the densitometer at the default parameters and for increasing exposure times. Fourier-based texture features were calculated from regions of interest from images from both modalities. The bone densitometry images contained more quantum noise than the radiographs resulting in increased values for the first moment of the power spectrum texture feature (1.22 times higher than from the standard radiograph). Presence of such noise may adversely affect the texture feature's ability to distinguish between strong and weak bone. By either increasing the exposure time or averaging multiple scans in the spatial frequency domain, we showed a reduction in the effect of the quantum mottle on the first moment of the power spectrum.
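
    The first moment of the power spectrum is a frequency-weighted centroid of the ROI's spectral power. A minimal sketch of one common formulation (the exact definition used in the study may differ):

```python
import numpy as np

def first_moment_power_spectrum(roi):
    """Frequency-weighted centroid of the 2-D power spectrum of a bone
    ROI. Quantum mottle adds high-frequency power, which raises this
    value, as reported for the densitometer images."""
    roi = np.asarray(roi, float)
    roi = roi - roi.mean()                        # remove the DC term
    ps = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2
    h, w = ps.shape
    y, x = np.indices(ps.shape)
    r = np.hypot(y - h / 2, x - w / 2)            # radial frequency index
    return np.sum(r * ps) / np.sum(ps)
```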

  14. Range accuracy analysis of streak tube imaging lidar systems

    NASA Astrophysics Data System (ADS)

    Ye, Guangchao; Fan, Rongwei; Chen, Zhaodong; Yuan, Wei; Chen, Deying; He, Ping

    2016-02-01

    Streak tube imaging lidar (STIL) is an active imaging system with high range accuracy and a wide range gate, using a pulsed laser transmitter and a streak tube receiver to produce 3D range images. This work investigates the range accuracy of STIL systems based on a peak detection algorithm, taking into account the effects of image blurring. A theoretical model of the time-resolved signal distribution, including the static blurring width in addition to the laser pulse width, is presented, resulting in a modified range accuracy analysis. The model indicates that the static blurring width has a significant effect on the range accuracy, which is validated by both simulation and experimental results. Using the optimal static blurring width, the range accuracies are enhanced in both indoor and outdoor experiments, with stand-off distances of 10 m and 1700 m, respectively; the corresponding best range errors of 0.06 m and 0.25 m were achieved in a daylight environment.

  15. Taylor impact tests on PBX composites: imaging and analysis

    NASA Astrophysics Data System (ADS)

    Graff Thompson, Daria; DeLuca, Racci; Archuleta, Jose; Brown, Geoff W.; Koby, Joseph

    2014-05-01

    A series of Taylor impact tests were performed on three plastic bonded explosive (PBX) formulations: PBX 9501, PBXN-9 and HPP (propellant). The first two formulations are HMX-based, and all three have been characterized quasi-statically in tension and compression. The Taylor impact tests use a 500 psi gas gun to launch PBX projectiles (approximately 30 grams, 16 mm diameter, 76 mm long) at a steel anvil at velocities as high as 215 m/s. Tests were performed remotely, and no sign of ignition/reaction has been observed to date. High-speed imaging was used to capture the impact of the specimen onto the anvil surface. Side-view contour images have been analyzed using dynamic stress equations from the literature, and front-view images have additionally been used to estimate a tensile strain failure criterion for initial specimen fracture. Post-test sieve analysis of specimen debris correlates fragmentation with projectile velocity, and these data show interesting differences between composites. Along with other quasi-static and dynamic measurements, Taylor impact images and fragmentation data provide a useful metric for the calibration or evaluation of intermediate-rate model predictions of PBX constitutive response and failure/fragmentation. Intermediate-rate tests involving other impact configurations are being considered.

  16. Taylor Impact Tests on PBX Composites: Imaging and Analysis

    NASA Astrophysics Data System (ADS)

    Thompson, Darla; Deluca, Racci

    2013-06-01

    A series of Taylor impact tests were performed on three plastic bonded explosive (PBX) formulations: PBX 9501, PBXN-9 and HPP (propellant). The first two formulations are HMX-based, and all three have been characterized quasi-statically in tension and compression. The Taylor impact tests use a 500 psi gas gun to launch PBX projectiles (approximately 30 grams, 16 mm diameter, 76 mm long) at velocities as high as 215 m/s. Tests were performed remotely, and no sign of ignition/reaction has been observed to date. High-speed imaging was used to capture the impact of the specimen onto the surface of a steel anvil. Side-view contour images have been analyzed using dynamic stress equations from the literature, and front-view images have additionally been used to estimate a tensile strain failure criterion for initial specimen fracture. Post-test sieve analysis of specimen debris correlates fragmentation with projectile velocity, and these data show interesting differences between composites. Along with other quasi-static and dynamic measurements, these impact images and fragmentation data provide a useful metric for the calibration or evaluation of intermediate-rate model predictions of PBX constitutive response and failure/fragmentation. Intermediate-rate tests involving other impact configurations are being considered.

  17. Dimensional analysis of blood vessel images in real time

    NASA Astrophysics Data System (ADS)

    Smith, Peter R.; Eustaquio-Martin, Almudena; Thomason, Harry; Bennett, M.; Thurston, H.

    1996-01-01

    The physiology and pathology of dissected blood vessels are studied by perfusion myography combined with video microscopy. Images of the vessels are formed under diffuse white-light illumination, and contrast is achieved by differential absorption with respect to the vessel wall. To obtain the vessel dimensional information in quasi-real time, an edge-tracking algorithm is used, allowing the edges to be found by applying common image processing tools to a very small number of pixels rather than the whole image. Employing a low-order optical model of the light transmission properties of vessels with circular cross section, a relationship between the positions of edges found by a typical image processing algorithm and the actual dimensions is derived. The dimensional analysis is demonstrated on rat mesenteric resistance arteries (internal diameter less than 300 μm) mounted in a perfusion arteriograph. Segments of vessels are secured on two glass cannulae using single strands of a nylon braided suture. The artery is perfused with physiological salt solution and the perfusion pressure maintained at 60 mmHg before starting the experiment. Changes in vascular diameter in response to the vasoconstrictor noradrenaline and the endothelium-dependent vasodilator acetylcholine were then observed.
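
    As a toy illustration of the edge-finding step on a single scan line: the paper's method tracks edges frame to frame and corrects the raw edge positions with its optical model, both of which are omitted here, and the names are illustrative.

```python
import numpy as np

def lumen_diameter(profile, um_per_pixel=1.0):
    """Locate the two vessel-wall edges on one intensity profile taken
    across the vessel, using the steepest falling and rising gradients,
    and convert their separation to micrometres."""
    g = np.gradient(np.asarray(profile, float))
    left = np.argmin(g)    # steepest dark transition (entering the wall)
    right = np.argmax(g)   # steepest bright transition (leaving the wall)
    return abs(right - left) * um_per_pixel
```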

  18. Difference image analysis of defocused observations with CSTAR

    SciTech Connect

    Oelkers, Ryan J.; Macri, Lucas M.; Wang, Lifan; Ashley, Michael C. B.; Lawrence, Jon S.; Luong-Van, Daniel; Cui, Xiangqun; Gong, Xuefei; Qiang, Liu; Yang, Huigen; Yuan, Xiangyan; Zhou, Xu; Feng, Long-Long; Zhu, Zhenxi; Pennypacker, Carl R.; York, Donald G.

    2015-02-01

    The Chinese Small Telescope ARray carried out high-cadence time-series observations of 27 square degrees centered on the South Celestial Pole during the Antarctic winter seasons of 2008–2010. Aperture photometry of the 2008 and 2010 i-band images resulted in the discovery of over 200 variable stars. Yearly servicing left the array defocused for the 2009 winter season, during which the system also suffered from intermittent frosting and power failures. Despite these technical issues, nearly 800,000 useful images were obtained using g, r, and clear filters. We developed a combination of difference imaging and aperture photometry to compensate for the highly crowded, blended, and defocused frames. We present details of this approach, which may be useful for the analysis of time-series data from other small-aperture telescopes regardless of their image quality. Using this approach, we were able to recover 68 previously known variables and detected variability in 37 additional objects. We also have determined the observing statistics for Dome A during the 2009 winter season; we find the extinction due to clouds to be less than 0.1 and 0.4 mag for 40% and 63% of the dark time, respectively.
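
    A heavily simplified sketch of the combination described above: subtract a scaled reference frame, then sum the residual flux in an aperture. Real difference imaging also convolves the reference to match the science-frame PSF, which is essential for the defocused data discussed here and is omitted from this illustration; all names are illustrative:

```python
import numpy as np

def aperture_flux_on_difference(sci, ref, x, y, r_ap=5.0):
    """Subtract a flux-scaled reference frame from a science frame and
    sum the residual flux in a circular aperture centered on (x, y).
    PSF matching of the reference is omitted here."""
    sci = np.asarray(sci, float)
    ref = np.asarray(ref, float)
    scale = np.median(sci) / np.median(ref)   # crude flux matching
    diff = sci - scale * ref
    yy, xx = np.indices(diff.shape)
    aperture = (xx - x) ** 2 + (yy - y) ** 2 <= r_ap ** 2
    return diff[aperture].sum()
```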

  19. Live imaging and analysis of postnatal mouse retinal development

    PubMed Central

    2013-01-01

    Background The explanted, developing rodent retina provides an efficient and accessible preparation for use in gene transfer and pharmacological experimentation. Many of the features of normal development are retained in the explanted retina, including retinal progenitor cell proliferation, heterochronic cell production, interkinetic nuclear migration, and connectivity. To date, live imaging in the developing retina has been reported in non-mammalian and mammalian whole-mount samples. An integrated approach to rodent retinal culture/transfection, live imaging, cell tracking, and analysis in structurally intact explants greatly improves our ability to assess the kinetics of cell production. Results In this report, we describe the assembly and maintenance of an in vitro, CO2-independent, live mouse retinal preparation that is accessible by both upright and inverted 2-photon or confocal microscopes. The optics of this preparation permit high-quality and multi-channel imaging of retinal cells expressing fluorescent reporters for up to 48 h. Tracking of interkinetic nuclear migration within individual cells and changes in retinal progenitor cell morphology are described. Follow-up hierarchical cluster screening revealed that several different dependent-variable measures can be used to identify and group movement kinetics in experimental and control samples. Conclusions Collectively, these methods provide a robust approach to assay multiple features of rodent retinal development using live imaging. PMID:23758927

  20. Material Science Image Analysis using Quant-CT in ImageJ

    SciTech Connect

    Ushizima, Daniela M.; Bianchi, Andrea G. C.; DeBianchi, Christina; Bethel, E. Wes

    2015-01-05

    We introduce a computational analysis workflow for assessing properties of solid objects using nondestructive X-ray imaging techniques. The goal is to process and quantify structures from material science sample cross sections. The algorithms can differentiate the porous medium (high density material) from the void (background, low density medium) using a Boolean classifier, so that we can extract features such as volume, surface area, granularity spectrum, and porosity, among others. Our workflow, Quant-CT, leverages several algorithms from ImageJ, such as statistical region merging and the 3D object counter. It also includes schemes for bilateral filtering with a 3D kernel, for parallel processing of sub-stacks, and for handling over-segmentation using histogram similarities. Quant-CT supports fast user interaction, allowing the user to train the algorithm on subsamples that feed its core algorithms with automated parameterization. The Quant-CT plugin is currently available for testing by personnel at the Advanced Light Source and Earth Sciences Divisions and Energy Frontier Research Center (EFRC), LBNL, as part of their research on porous materials. The goal is to understand the processes in fluid-rock systems for the geologic sequestration of CO2, and to develop technology for the safe storage of CO2 in deep subsurface rock formations. We describe our implementation, and demonstrate our plugin on porous material images. This paper targets end-users, with relevant information for developers to extend its current capabilities.

  1. Towards analysis of growth trajectory through multimodal longitudinal MR imaging

    NASA Astrophysics Data System (ADS)

    Sadeghi, Neda; Prastawa, Marcel; Gilmore, John H.; Lin, Weili; Gerig, Guido

    2010-03-01

    The human brain undergoes significant changes in the first few years after birth, but knowledge about this critical period of development is quite limited. Previous neuroimaging studies have been mostly focused on morphometric measures such as volume and shape, although tissue property measures related to the degree of myelination and axon density could also add valuable information to our understanding of brain maturation. Our goal is to complement brain growth analysis via morphometry with the study of longitudinal tissue property changes as reflected in patterns observed in multi-modal structural MRI and DTI. Our preliminary study includes eight healthy pediatric subjects with repeated scans at the age of two weeks, one year, and two years with T1, T2, PD, and DT MRI. Analysis is driven by the registration of multiple modalities and time points within and between subjects into a common coordinate frame, followed by image intensity normalization. Quantitative tractography with diffusion and structural image parameters serves for multivariate tissue analysis. Different patterns of rapid changes were observed in the corpus callosum and the posterior and anterior internal capsule, structures known for distinctly different myelination growth. There are significant differences in central versus peripheral white matter. We demonstrate that the combined longitudinal analysis of structural and diffusion MRI proves superior to individual modalities and might provide a better understanding of the trajectory of early neurodevelopment.

  2. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and
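
    As a concrete illustration of estimating a fractal dimension from a grayscale image treated as a surface, here is a differential box-counting sketch, one common estimator among several and not drawn from this paper; the box sizes and gray-box height rule are illustrative choices:

```python
import numpy as np

def box_counting_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
    """Differential box-counting estimate of the fractal dimension of a
    grayscale image treated as a height surface z = f(x, y)."""
    img = np.asarray(img, float)
    g_range = img.max() - img.min() + 1          # gray-level span
    counts = []
    for s in box_sizes:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        zmax = blocks.max(axis=(1, 3))
        zmin = blocks.min(axis=(1, 3))
        box_height = g_range * s / img.shape[0]  # gray axis scales with s
        n_per_col = np.ceil((zmax - zmin + 1) / box_height)
        counts.append(n_per_col.sum())
    # D is the slope of log N(s) against log(1/s), i.e. minus the slope
    # against log(s).
    slope = np.polyfit(np.log(box_sizes), np.log(counts), 1)[0]
    return -slope
```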

  3. Automated Imaging and Analysis of the Hemagglutination Inhibition Assay.

    PubMed

    Nguyen, Michael; Fries, Katherine; Khoury, Rawia; Zheng, Lingyi; Hu, Branda; Hildreth, Stephen W; Parkhill, Robert; Warren, William

    2016-04-01

    The hemagglutination inhibition (HAI) assay quantifies the level of strain-specific influenza virus antibody present in serum and is the standard by which influenza vaccine immunogenicity is measured. The HAI assay endpoint requires real-time monitoring of rapidly evolving red blood cell (RBC) patterns for signs of agglutination at a rate of potentially thousands of patterns per day to meet the throughput needs for clinical testing. This analysis is typically performed manually through visual inspection by highly trained individuals. However, concordant HAI results across different labs are challenging to demonstrate due to analyst bias and variability in analysis methods. To address these issues, we have developed a bench-top, standalone, high-throughput imaging solution that automatically determines the agglutination states of up to 9600 HAI assay wells per hour and assigns HAI titers to 400 samples in a single unattended 30-min run. Images of the tilted plates are acquired as a function of time and analyzed using algorithms that were developed through comprehensive examination of manual classifications. Concordance testing of the imaging system with eight different influenza antigens demonstrates 100% agreement between automated and manual titer determination with a percent difference of ≤3.4% for all cases. PMID:26464422

  4. Difference Image Analysis of De-Focused 2009 CSTAR Observations

    NASA Astrophysics Data System (ADS)

    Oelkers, Ryan J.; Macri, L. M.; Wang, L.; PLATO; CSTAR

    2014-01-01

    The Chinese Small Telescope ARray (CSTAR) carried out high-cadence time-series observations of a 27-square-degree region centered on the South Celestial Pole during the Antarctic winter seasons of 2008, 2009 and 2010. Analysis of the 2008 and 2010 data using aperture photometry resulted in the discovery of 198 variables with i < 15.3 mag. Routine servicing completed after the 2008 winter season left the telescope out of focus for the 2009 winter season. The telescope also suffered a power loss after ~2 months of observation. In spite of the telescope's technical issues, nearly 250,000 usable images were taken in the 'g', 'N', and 'r' bands. We used a combination of difference imaging and aperture photometry to compensate for the highly crowded, blended and out-of-focus images. We are able to recover more than 100 of the variables in the 'g'-band and have discovered about 20 objects to be explored further. We discuss the analysis of all of these objects in the 'N' and 'r' bands as well. We also present the preliminary results of the application of this technique to another time-series data set taken from Tolar Grande, Argentina during the 2013 southern winter. Ryan J. Oelkers would like to acknowledge the support of a grant from the George P. and Cynthia Woods Mitchell Institute for Fundamental Physics & Astronomy.

  5. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  6. Atlas of protein expression: image capture, analysis, and design of terabyte image database

    NASA Astrophysics Data System (ADS)

    Wu, Jiahua; Maslen, Gareth; Warford, Anthony; Griffin, Gareth; Xie, Jane; Crowther, Sandra; McCafferty, John

    2006-03-01

    The activity of genes in health and disease is manifested through the proteins they encode. Ultimately, proteins drive functional processes in cells and tissues, so by measuring individual protein levels, studying modifications and discovering their sites of action we will better understand their function. It is possible to visualize the location of proteins of interest in tissue sections using labeled antibodies that bind to the target protein. This procedure, known as immunohistochemistry (IHC), provides valuable information on the cellular and sub-cellular distribution of proteins in tissue. The Atlas of Protein Expression project aims to create a high-quality, information-rich database of protein expression profiles that is accessible to the world-wide research community. Beyond the long-term archival value of the data, the accompanying validated antibodies and protein clones will potentially have great research, diagnostic and possibly therapeutic value. To achieve this we have introduced a number of novel technologies to express recombinant proteins, select antibodies, stain proteins present in tissue sections, and perform tissue microarray (TMA) image analysis. These are currently being optimized, automated and integrated into a multi-disciplinary production process. We have also created the infrastructure for multi-terabyte scale image capture and established an image analysis capability for initial screening and quantification.

  7. Estimating transport properties of mortars using image analysis on backscattered electron images

    SciTech Connect

    Wong, H.S. (E-mail: hong.wong@imperial.ac.uk); Buenfeld, N.R.; Head, M.K.

    2006-08-15

    The pore structure of two ordinary Portland cement mortars at water-cement ratios of 0.35 and 0.70 was characterised using quantitative backscattered electron imaging. The mortars were cured and conditioned to produce a range of pore structure characteristics. Image analysis was used to characterise the pore structure in terms of simple morphological parameters such as resolvable porosity and specific surface area. These were found to be correlated with measured transport coefficients (diffusivity, permeability and sorptivity), suggesting that image analysis can feasibly derive valuable quantitative information describing the pore structure for use as input to a transport prediction model. A simple analytical model incorporating tortuosity and constrictivity was used to predict oxygen diffusivity, and a variant of the Kozeny-Carman model was used to predict oxygen permeability. The diffusion model tended to over-predict for the lower w/c ratio mortar, but the general agreement was reasonable, with 90% of the estimated values within a factor of two of the measured values. The modified Kozeny-Carman model, however, over-predicted all permeability values, with errors of between half and one order of magnitude.
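
    For reference, the classical Kozeny-Carman relation that the permeability model above builds on predicts permeability directly from porosity and specific surface; a minimal sketch with the textbook form and illustrative inputs (not the paper's calibrated variant or measured values):

```python
def kozeny_carman_permeability(phi, s0, c=5.0):
    """Textbook Kozeny-Carman permeability estimate (m^2).

    phi : porosity as a volume fraction (0-1), e.g. the resolvable
          porosity measured from backscattered electron images
    s0  : specific surface area of the solid phase per unit solid
          volume (1/m), also obtainable from segmented images
    c   : Kozeny constant, commonly taken as about 5
    """
    return phi**3 / (c * s0**2 * (1.0 - phi)**2)

# Illustrative numbers only: 12% porosity and s0 = 2e6 1/m give a
# permeability on the order of 1e-16 m^2.
print(kozeny_carman_permeability(0.12, 2.0e6))
```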

  8. Cutting-Edge Analysis of Extracellular Microparticles using ImageStreamX Imaging Flow Cytometry

    PubMed Central

    Headland, Sarah E.; Jones, Hefin R.; D'Sa, Adelina S. V.; Perretti, Mauro; Norling, Lucy V.

    2014-01-01

    Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. In order for microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of the Amnis ImageStreamX Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm down to 20 nm, and of microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma, and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high-throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real-time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStreamX could be used effectively to advance this scientific field. PMID:24913598

  9. Insight to Nanoparticle Size Analysis - Novel and Convenient Image Analysis Method Versus Conventional Techniques.

    PubMed

    Vippola, Minnamari; Valkonen, Masi; Sarlin, Essi; Honkanen, Mari; Huttunen, Heikki

    2016-12-01

    The aim of this paper is to introduce a new image analysis program, "Nanoannotator", developed particularly for analyzing individual nanoparticles in transmission electron microscopy images. This paper describes the usefulness and efficiency of the program for nanoparticle analysis and compares it to more conventional techniques, namely transmission electron microscopy (TEM) linked with different image analysis methods, and X-ray diffraction techniques. The developed program proved a good supplement to the field of particle analysis techniques, since traditional image analysis programs suffer from an inability to separate individual particles from agglomerates in TEM images. The program is more efficient, and it offers more detailed morphological information on the particles, than the manual technique. However, particle shapes that are very different from spherical proved problematic for the novel program as well. When compared to X-ray techniques, the main advantage of the small-angle X-ray scattering (SAXS) method is the averaged data it provides from a very large number of particles. However, the SAXS method does not provide any data about the shape or appearance of the sample. PMID:27030469
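
    Separating touching particles in agglomerates, the weakness of traditional programs noted above, is often handled with a distance-transform watershed; the sketch below uses scikit-image (a generic approach, not Nanoannotator's actual algorithm):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

def separate_particles(image):
    """Split touching particles in a grayscale image with a
    distance-transform watershed and return an integer label map."""
    # Binarize; flip the comparison if particles are darker than background.
    binary = image > threshold_otsu(image)
    # The distance to the background peaks at particle centers.
    distance = ndi.distance_transform_edt(binary)
    # One marker per presumed particle center.
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    # Flood from the markers over the inverted distance map.
    return watershed(-distance, markers, mask=binary)

# Demo: two overlapping disks should come back as two labels.
yy, xx = np.mgrid[0:80, 0:80]
disks = ((xx - 30)**2 + (yy - 40)**2 < 15**2) | ((xx - 50)**2 + (yy - 40)**2 < 15**2)
print(np.unique(separate_particles(disks.astype(float))))  # -> [0 1 2]
```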

  10. Computer image analysis of etched tracks from ionizing radiation

    NASA Technical Reports Server (NTRS)

    Blanford, George E.

    1994-01-01

    I proposed to continue a cooperative research project with Dr. David S. McKay concerning image analysis of tracks. Last summer we showed that we could measure track densities using the Oxford Instruments eXL computer and software that is attached to an ISI scanning electron microscope (SEM) located in building 31 at JSC. To reduce the dependence on JSC equipment, we proposed to transfer the SEM images to UHCL for analysis. Last summer we developed techniques to use digitized scanning electron micrographs and computer image analysis programs to measure track densities in lunar soil grains. Tracks were formed by highly ionizing solar energetic particles and cosmic rays during near surface exposure on the Moon. The track densities are related to the exposure conditions (depth and time). Distributions of the number of grains as a function of their track densities can reveal the modality of soil maturation. As part of a consortium effort to better understand the maturation of lunar soil and its relation to its infrared reflectance properties, we worked on lunar samples 67701,205 and 61221,134. These samples were etched for a shorter time (6 hours) than last summer's sample and this difference has presented problems for establishing the correct analysis conditions. We used computer counting and measurement of area to obtain preliminary track densities and a track density distribution that we could interpret for sample 67701,205. This sample is a submature soil consisting of approximately 85 percent mature soil mixed with approximately 15 percent immature, but not pristine, soil.

  11. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of the parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion and the Dungeon Illusion can all be understood by analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; calculating appearances in familiar Gestalt experiments does not require complex cognitive processes.

  12. Patient-adaptive lesion metabolism analysis by dynamic PET images.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2012-01-01

    Dynamic PET imaging provides important spatio-temporal information for the metabolism analysis of organs and tissues, and provides a valuable reference for clinical diagnosis and pharmacokinetic analysis. Due to the poor statistical properties of the measurement data in low-count dynamic PET acquisition and disturbances from surrounding tissues, identifying small lesions inside the human body is still a challenging issue. Uncertainties in estimating the arterial input function also limit the accuracy and reliability of the metabolism analysis of lesions. Furthermore, patient size and motion during PET acquisition yield a mismatch against a general-purpose reconstruction system matrix, which likewise affects the quantitative accuracy of lesion metabolism analyses. In this paper, we present a dynamic PET metabolism analysis framework that defines a patient-adaptive system matrix to improve lesion metabolism analysis. Both patient size information and potential small lesions are incorporated through simulations of phantoms of different sizes and individual point source responses. The new framework improves the quantitative accuracy of lesion metabolism analysis and makes lesion identification more precise. The requirement for accurate input functions is also reduced. Experiments are conducted on a Monte Carlo simulated data set for quantitative analysis and validation, and on real patient scans for assessment of clinical potential. PMID:23286175

  13. Analysis of interstellar fragmentation structure based on IRAS images

    NASA Technical Reports Server (NTRS)

    Scalo, John M.

    1989-01-01

    The goal of this project was to develop new tools for the analysis of the structure of densely sampled maps of interstellar star-forming regions. A particular emphasis was on the recognition and characterization of nested hierarchical structure and fractal irregularity, and their relation to the level of star formation activity. The panoramic IRAS images provided data with the required range in spatial scale, greater than a factor of 100, and in column density, greater than a factor of 50. In order to construct a densely sampled column density map of a cloud complex which is both self-gravitating and not (yet?) stirred up much by star formation, a column density image of the Taurus region has been constructed from IRAS data. The primary drawback to using the IRAS data for this purpose is that it contains no velocity information, and the possible importance of projection effects must be kept in mind.

  14. The MARTE VNIR Imaging Spectrometer Experiment: Design and Analysis

    NASA Astrophysics Data System (ADS)

    Brown, Adrian J.; Sutter, Brad; Dunagan, Stephen

    2008-10-01

    We report on the design, operation, and data analysis methods employed on the VNIR imaging spectrometer instrument that was part of the Mars Astrobiology Research and Technology Experiment (MARTE). The imaging spectrometer is a hyperspectral scanning pushbroom device sensitive to VNIR wavelengths from 400-1000 nm. During the MARTE project, the spectrometer was deployed to the Río Tinto region of Spain. We analyzed subsets of three cores from Río Tinto using a new band modeling technique. We found most of the MARTE drill cores to contain predominantly goethite, though spatially coherent areas of hematite were identified in Core 23. We also distinguished non Fe-bearing minerals that were subsequently analyzed by X-ray diffraction (XRD) and found to be primarily muscovite. We present drill core maps that include spectra of goethite, hematite, and non Fe-bearing minerals.

  15. Difference Image Analysis of Galactic Microlensing. II. Microlensing Events

    SciTech Connect

    Alcock, C.; Allsman, R. A.; Alves, D.; Axelrod, T. S.; Becker, A. C.; Bennett, D. P.; Cook, K. H.; Drake, A. J.; Freeman, K. C.; Griest, K.

    1999-09-01

    The MACHO collaboration has been carrying out difference image analysis (DIA) since 1996 with the aim of increasing the sensitivity to the detection of gravitational microlensing. This is a preliminary report on the application of DIA to galactic bulge images in one field. We show how the DIA technique significantly increases the number of detected lensing events, by removing the positional dependence of traditional photometry schemes and lowering the microlensing event detection threshold. This technique, unlike PSF photometry, gives the unblended colors and positions of the microlensing source stars. We present a set of criteria for selecting microlensing events from objects discovered with this technique. The 16 pixel and classical microlensing events discovered with the DIA technique are presented. (c) 1999 The American Astronomical Society.

  16. Quantitative image analysis of WE43-T6 cracking behavior

    NASA Astrophysics Data System (ADS)

    Ahmad, A.; Yahya, Z.

    2013-06-01

    Environment-assisted cracking of WE43 cast magnesium (4.2 wt.% Y, 2.3 wt.% Nd, 0.7 wt.% Zr, 0.8 wt.% HRE) in the T6 peak-aged condition was induced in ambient air in notched specimens. The mechanism of fracture was studied using electron backscatter diffraction, serial sectioning and in situ observations of crack propagation. The intermetallic material (rare-earth-enriched divorced intermetallic retained at grain boundaries and predominantly at triple points) was found to play a significant role in initiating the cracks that lead to failure of this material. Quantitative measurements were required for this project. The populations of intermetallic particles and clusters of intermetallic particles were analyzed using image analysis of metallographic images. This is part of work to generate a theoretical model of the effect of notch geometry on the static fatigue strength of this material.

  17. Biomechanical cell analysis using quantitative phase imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wax, Adam; Park, Han Sang; Eldridge, William J.

    2016-03-01

    Quantitative phase imaging provides nanometer scale sensitivity and has been previously used to study spectral and temporal characteristics of individual cells in vitro, especially red blood cells. Here we extend this work to study the mechanical responses of individual cells due to the influence of external stimuli. Cell stiffness may be characterized by analyzing the inherent thermal fluctuations of cells but by applying external stimuli, additional information can be obtained. The time dependent response of cells due to external shear stress is examined with high speed quantitative phase imaging and found to exhibit characteristics that relate to their stiffness. However, analysis beyond the cellular scale also reveals internal organization of the cell and its modulation due to pathologic processes such as carcinogenesis. Further studies with microfluidic platforms point the way for using this approach in high throughput assays.

  18. Analysis of hydroelastic slamming through particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Panciroli, R.; Porfiri, M.

    2015-07-01

    Predicting the hydrodynamic loading experienced by lightweight structures during water impact is central to the design of marine vessels and aircraft. Here, hydroelastic effects of flexible panels during water entry are studied through particle image velocimetry. Experiments are conducted on a compliant wedge entering the water surface in free fall for varying entry velocities, and the pressure field is indirectly evaluated from particle image velocimetry. Results show that the impact is responsible for prominent multimodal vibrations of the wedge, and, vice versa, that the wedge flexibility strongly influences the hydrodynamic loading. With respect to rigid wedges, the hydrodynamic loading exhibits marked spatial variations, which control the location of the minimum and maximum pressure on the wetted surface, and temporal oscillations, which modulate the direction of the hydrodynamic force. These experimental results are expected to aid in refining computational schemes for the analysis of hydroelastic phenomena and provide guidelines for structural design.

  19. Disability in physical education textbooks: an analysis of image content.

    PubMed

    Táboas-Pais, María Inés; Rey-Cao, Ana

    2012-10-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted to the requirements of this study with additional categories. The variables were camera angle, gender, type of physical activity, field of practice, space, and level. Univariate and bivariate descriptive analyses were also carried out. The Pearson chi-square statistic was used to identify associations between the variables. Results showed a noticeable imbalance between people with disabilities and people without disabilities, and women with disabilities were less frequently represented than men with disabilities. People with disabilities were depicted as participating in a very limited variety of segregated, competitive, and elite sports activities. PMID:23027145

  1. Comparison of Spectral and Image Morphological Analysis for Egg Early Hatching Property Detection Based on Hyperspectral Imaging

    PubMed Central

    Zhang, Wei; Pan, Leiqing; Tu, Kang; Zhang, Qiang; Liu, Ming

    2014-01-01

    The use of non-destructive methods to detect egg hatching properties could increase efficiency in commercial hatcheries by saving space, reducing costs, and ensuring hatching quality. For this purpose, a hyperspectral imaging system was built to detect embryo development and vitality using spectral and morphological information from hatching eggs. A total of 150 green-shell eggs were used, and hyperspectral images were collected for every egg on days 0, 1, 2, 3 and 4 of incubation. After imaging, two analysis methods were developed to extract egg hatching characteristics. First, hyperspectral images of the samples were evaluated using Principal Component Analysis (PCA), and a single optimal band at 822 nm was selected for extracting the spectral characteristics of hatching eggs. Second, an image segmentation algorithm was applied to isolate the image morphological characteristics of hatching eggs. To investigate the applicability of spectral and image morphological analysis for detecting egg early hatching properties, a Learning Vector Quantization neural network (LVQNN) was employed. The experimental results demonstrated that the model using image morphological characteristics achieved better accuracy and generalization than the one using spectral characteristic parameters; the discrimination accuracies for eggs with embryo development were 97% at day 3 and 100% at day 4. In addition, the recognition results for eggs with weak embryo development reached 81% at day 3 and 92% at day 4. This study suggests that image morphological analysis is a novel application of hyperspectral imaging technology for detecting egg early hatching properties. PMID:24551130
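
    One simple way to perform the kind of PCA-driven band selection described above is to rank bands by the magnitude of their loading on the first principal component (a generic procedure, not necessarily the authors' exact pipeline; the index returned depends entirely on the data):

```python
import numpy as np

def pick_informative_band(cube):
    """Return the index of the spectral band with the largest absolute
    loading on the first principal component of a hyperspectral cube
    of shape (rows, cols, bands)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                      # center each band
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return int(np.argmax(np.abs(Vt[0])))     # PC1's dominant band

# Random cube standing in for real incubation-day imagery:
cube = np.random.rand(64, 64, 128)
print(pick_informative_band(cube))
```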

  2. Scanning ion images; analysis of pharmaceutical drugs at organelle levels

    NASA Astrophysics Data System (ADS)

    Larras-Regard, E.; Mony, M.-C.

    1995-05-01

    With the ion analyser IMS 4F used in microprobe mode, it is possible to obtain images of fields of 10 × 10 μm², corresponding to an effective magnification of 7000 with a lateral resolution of 250 nm, technical characteristics that are appropriate for the size of cell organelles. It is possible to characterize organelles by their relative CN⁻, P⁻ and S⁻ intensities when the tissues are prepared by freeze fixation and freeze substitution. The recognition of organelles enables correlation with the tissue distribution of ebselen, a pharmaceutical drug containing selenium. The various metabolites characterized in plasma, bile and urine during biotransformation of ebselen all contain selenium, so the presence of the drug and its metabolites can be followed by images of Se. We were also able to detect the endogenous content of Se in tissue, due to the increased sensitivity of ion analysis in microprobe mode. Our results show a natural occurrence of Se in the border corresponding to the basal lamina of cells of proximal but not distal tubules of the kidney. After treatment of rats with ebselen, an additional site of Se is found in the lysosomes. We suggest that in addition to direct elimination of ebselen and its metabolites by glomerular filtration and urinary elimination, a second process of elimination may occur: Se compounds reaching the epithelial cells via the basal lamina accumulate in lysosomes prior to excretion into the tubular fluid. The technical developments of the IMS 4F instrument in microprobe mode and the improvements in sample preparation by freeze fixation and substitution further extend the limits of ion analysis in biology. Direct imaging of trace elements and of molecules marked with a tracer makes it possible to determine their targets by comparison with images of subcellular structures. This is a promising advance in the study of the pathways of compounds within tissues, cells and the whole organism.

  3. Use of nanotomographic images for structure analysis of carbonate rocks

    SciTech Connect

    Nagata, Rodrigo; Appoloni, Carlos Roberto

    2014-11-11

    Carbonate rocks store more than 50% of the world's petroleum. These rocks' structures are highly complex and vary depending on many factors in their formation, e.g., lithification and diagenesis. In order to perform an effective extraction of petroleum it is necessary to know petrophysical parameters such as the total porosity, pore size and permeability of the reservoir rocks. Carbonate rocks usually have a range of pore sizes from nanometers to meters or even dozens of meters. The nanopores and micropores might play an important role in the pore connectivity of carbonate rocks. X-ray computed tomography (CT) has been widely used to analyze petrophysical parameters in recent years. This technique can generate 2D images of a sample's inner structure and also allows 3D reconstruction of the analyzed volume. CT is a powerful technique, but its results depend on the spatial resolution of the generated image. Spatial resolution is a measurement parameter that indicates the smallest object that can be detected. There are great difficulties in generating images with nanoscale resolution (nanotomographic images). In this work three carbonate rocks, one dolomite and two limestones (referred to as limestone A and limestone B), were analyzed by nanotomography. The measurements were performed with the SkyScan 2011 nanotomograph, operated at 60 kV and 200 μA for the dolomite sample and at 40 kV and 200 μA for the limestone samples. Each sample was measured at a given spatial resolution (270 nm for the dolomite sample, 360 nm for limestone A and 450 nm for limestone B). The resulting total porosities were 3.09% for dolomite, 0.65% for limestone A and 3.74% for limestone B. This paper reports the difficulties of acquiring nanotomographic images and further analyses of the samples' pore sizes.
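
    Once a tomographic volume is reconstructed and segmented, the total (resolvable) porosity reduces to a voxel count; a minimal sketch, assuming pores image darker than the carbonate matrix and that a global Otsu threshold separates the phases:

```python
import numpy as np
from skimage.filters import threshold_otsu

def total_porosity(volume):
    """Fraction of pore voxels in a reconstructed CT volume,
    with pores assumed darker than the solid matrix."""
    pores = volume < threshold_otsu(volume)
    return pores.mean()                 # pore voxels / all voxels

# Synthetic stand-in: a bright matrix containing one dark "pore".
vol = np.full((100, 100, 100), 200, dtype=np.uint8)
vol[:10, :10, :10] = 20
print(total_porosity(vol))              # -> 0.001
```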

  4. Image Science and Analysis Group Spacecraft Damage Detection/Characterization

    NASA Technical Reports Server (NTRS)

    Wheaton, Ira M., Jr.

    2010-01-01

    This project consisted of several tasks that could be served by an intern to assist the ISAG in detecting damage to spacecraft during missions. First, the project supported Micrometeoroid Orbital Debris (MMOD) damage detection and assessment for the Hubble Space Telescope (HST) using imagery from the last two HST Shuttle servicing missions. Here we used the coordinates of two windows on the Shuttle aft flight deck from which images were taken, together with the coordinates of three ID points, to calculate the distance from each window to the three points. Then, using the specifications of the camera used, we calculated the image scale in pixels per inch for planes parallel to the image plane and for planes offset along the z-direction (shown in Table 1). This will help in the future for calculating measurements of objects in the images. Next, tabulation and statistical analysis were conducted for screening results (shown in Table 2) of imagery with Orion Thermal Protection System (TPS) damage. Using the Microsoft Excel CRITBINOM function and Goal Seek, the probabilities of detection of damage to different Shuttle tiles were calculated, as shown in Table 3. Using developed measuring tools, volume and area measurements will be created from 3D models of Orion TPS damage. Last, mathematical expertise was provided to the Photogrammetry Team. These tasks consisted of developing elegant image-space error equations for observations along 3D lines, circles, planes, etc., and checking proofs for minimal sets of sufficient multi-linear constraints. Some of the processes and resulting equations are displayed in Figure 1.
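
    Excel's CRITBINOM, used above for the probability-of-detection tables, is the binomial percent point function; a sketch of the equivalence with hypothetical numbers (not values from the report):

```python
from scipy.stats import binom

def critbinom(trials, p, alpha):
    """Equivalent of Excel's CRITBINOM(trials, p, alpha): the smallest
    k whose binomial CDF reaches alpha."""
    return int(binom.ppf(alpha, trials, p))

# Hypothetical example: 50 inspected tiles, 10% per-tile detection
# probability; the 95th-percentile detection count is 9.
print(critbinom(50, 0.10, 0.95))  # -> 9
```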

  5. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin-contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, with limits of 20 ppb (parts per billion) in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TLC) and high-performance liquid chromatography (HPLC). These analytical tests require the destruction of samples and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly the corn industry. Hyperspectral imaging technology offers a non-invasive approach to screening for food safety inspection and quality control based on spectral signatures. The focus of this paper is the classification of aflatoxin-contaminated single corn kernels using fluorescence hyperspectral imagery. Field-inoculated corn kernels were used in the study. Contaminated and control kernels under long-wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for the image analysis. This paper describes a procedure for processing corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel as "control" or "contaminated" through pixel classification. The Binary Encoding approach had slightly better performance, with accuracy equal to 87% or 88% when 20 ppb or 100 ppb was used as the classification threshold, respectively.
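
    Of the two classifiers compared above, Maximum Likelihood is the simpler to sketch: model each class as a multivariate Gaussian over pixel spectra and assign each pixel to the class with the higher log-likelihood. A bare-bones version with equal priors (generic method, not the authors' exact implementation):

```python
import numpy as np

def train_ml_classifier(spectra_by_class):
    """Fit one Gaussian per class; spectra_by_class maps a label
    ('control', 'contaminated') to an (n_pixels, n_bands) array."""
    stats = {}
    for label, X in spectra_by_class.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        stats[label] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return stats

def classify_pixel(pixel, stats):
    """Assign a pixel spectrum to the class with the highest Gaussian
    log-likelihood (equal priors assumed)."""
    def loglik(mu, icov, logdet):
        d = pixel - mu
        return -0.5 * (logdet + d @ icov @ d)
    return max(stats, key=lambda label: loglik(*stats[label]))

# Tiny synthetic demo with 4-band spectra:
rng = np.random.default_rng(0)
train = {'control': rng.normal(0.2, 0.05, (200, 4)),
         'contaminated': rng.normal(0.6, 0.05, (200, 4))}
stats = train_ml_classifier(train)
print(classify_pixel(np.full(4, 0.58), stats))  # -> 'contaminated'
```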

  6. Analysis and interpretation of imaging mass spectrometry data by clustering mass-to-charge images according to their spatial similarity.

    PubMed

    Alexandrov, Theodore; Chernyavsky, Ilya; Becker, Michael; von Eggeling, Ferdinand; Nikolenko, Sergey

    2013-12-01

    Imaging mass spectrometry (imaging MS) has emerged in the past decade as a label-free, spatially resolved, and multipurpose bioanalytical technique for the direct analysis of biological samples, from animal tissue and plant tissue to biofilms and polymer films. Imaging MS has been successfully incorporated into many biomedical pipelines, where it is usually applied in the so-called untargeted mode, capturing the spatial localization of a multitude of ions from a wide mass range. An imaging MS data set usually comprises thousands of spectra and tens to hundreds of thousands of mass-to-charge (m/z) images, and can be as large as several gigabytes. Unsupervised analysis of an imaging MS data set aims at finding hidden structures in the data with no a priori information and is often the first step of imaging MS data analysis. We propose a novel, easy-to-use and easy-to-implement approach to answer one of the key questions of unsupervised analysis of imaging MS data: what do all the m/z images look like? The key idea of the approach is to cluster all m/z images according to their spatial similarity, so that each cluster contains spatially similar m/z images. We propose a visualization of both the spatial and spectral information obtained using clustering that provides an easy way to understand what all the m/z images look like. We evaluated the proposed approach on matrix-assisted laser desorption/ionization imaging MS data sets of a rat brain coronal section and a human larynx carcinoma, and discuss several scenarios of data analysis. PMID:24180335
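
    The core idea, clustering m/z images by spatial similarity, fits in a few lines: flatten each ion image, compare images with a correlation distance (so shared spatial patterns matter, not absolute intensity), and cut a hierarchical clustering into groups. A generic rendering; the paper's exact similarity measure and clustering scheme may differ:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_mz_images(images, n_clusters=10):
    """Group m/z images (n_images, height, width) by spatial
    similarity and return one cluster label per image."""
    X = images.reshape(len(images), -1).astype(float)
    D = pdist(X, metric='correlation')    # pairwise image distances
    Z = linkage(D, method='average')      # hierarchical clustering
    return fcluster(Z, t=n_clusters, criterion='maxclust')

# Demo: 100 random 50x60 ion images grouped into 10 clusters.
imgs = np.random.rand(100, 50, 60)
print(cluster_mz_images(imgs))
```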

  7. Does thorax EIT image analysis depend on the image reconstruction method?

    NASA Astrophysics Data System (ADS)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2013-04-01

    Various methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC); and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences of the GI index based on different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those for filtered back-projection and GREITC.
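
    The GI (global inhomogeneity) index used above is commonly defined as the summed absolute deviation of pixel tidal impedance from the median lung value, normalized by the total tidal impedance; a minimal sketch under that assumed definition:

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """Global inhomogeneity index of an EIT tidal image; lower values
    indicate more homogeneous ventilation."""
    di = tidal_image[lung_mask]          # tidal impedance per lung pixel
    return np.abs(di - np.median(di)).sum() / di.sum()

# Perfectly homogeneous ventilation gives GI = 0.
img = np.ones((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 4:28] = True
print(gi_index(img, mask))               # -> 0.0
```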

  8. Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis

    NASA Astrophysics Data System (ADS)

    Mah, J.; Claire, S.; Steve, M.

    2009-05-01

    Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation-based, short-range laser imaging to determine fracture orientation and surface texture. The technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to ambient lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above-ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario), where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high-density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is manual measurement with a compass. In order to be accepted as a substitute for this method, the LCS should perform at least as well as manual measurement. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above-ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. Underground, two main joint
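
    Extracting orientations from such a mesh ultimately comes down to fitting planes to patches of 3-D points and converting the plane normals to dip direction and dip; a minimal sketch of that step (generic least-squares fit, not the Polyworks or Neptec implementation):

```python
import numpy as np

def plane_dip(points):
    """Fit a plane to fracture-surface points (N x 3 array with
    columns east, north, up) and return (dip_direction_deg, dip_deg),
    azimuth measured clockwise from north."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The last right singular vector is the direction of least
    # variance, i.e. the plane normal.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    n = Vt[-1]
    if n[2] < 0:                          # use the upward normal
        n = -n
    dip = np.degrees(np.arccos(n[2]))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return dip_dir, dip

# A plane dipping 30 degrees due east (dip direction 090):
x, y = np.meshgrid(np.linspace(0, 5, 20), np.linspace(0, 5, 20))
z = -np.tan(np.radians(30.0)) * x
print(plane_dip(np.column_stack([x.ravel(), y.ravel(), z.ravel()])))
# -> approximately (90.0, 30.0)
```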

  9. Wavelet Analysis of SAR Images for Coastal Monitoring

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Wu, Sunny Y.; Tseng, William Y.; Pichel, William G.

    1998-01-01

    The mapping of mesoscale ocean features in the coastal zone is a major potential application for satellite data. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ice edges can be tracked by wavelet analysis using satellite data from repeating paths. The wavelet transform has been applied to satellite images, such as those from Synthetic Aperture Radar (SAR), the Advanced Very High-Resolution Radiometer (AVHRR), and ocean color sensors, for feature extraction. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite SAR imagery employing wavelet analysis have been developed. Case studies of two major coastal oil spills have been investigated using wavelet analysis for tracking, along the coast of Uruguay (February 1997) and near Point Barrow, Alaska (November 1997). Comparison of SAR images with SeaWiFS (Sea-viewing Wide Field-of-view Sensor) data for the coccolithophore bloom in the East Bering Sea during the fall of 1997 shows a good match on the bloom boundary. This paper demonstrates that this technique is a useful and promising tool for the monitoring of coastal waters.
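
    A toy version of wavelet-based feature extraction: drop the coarse approximation, keep only strong detail coefficients, and reconstruct, so that sharp boundaries such as fronts or slick edges stand out (illustrative only; the operational algorithms referenced above are considerably more elaborate):

```python
import numpy as np
import pywt

def wavelet_edges(image, wavelet='haar', level=2, k=3.0):
    """Suppress smooth background and weak texture in an image,
    keeping only detail coefficients well above the mean magnitude."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])           # drop approximation
    for i in range(1, len(coeffs)):
        kept = []
        for c in coeffs[i]:                        # cH, cV, cD details
            t = np.abs(c).mean() + k * np.abs(c).std()
            kept.append(np.where(np.abs(c) > t, c, 0.0))
        coeffs[i] = tuple(kept)
    return pywt.waverec2(coeffs, wavelet)

# Synthetic scene with a sharp "front" down the middle:
scene = np.zeros((128, 128))
scene[:, 64:] = 1.0
edges = wavelet_edges(scene)
print(np.abs(edges).argmax(axis=1)[:5])  # responses cluster near column 64
```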

  10. Multispectral imaging for digital painting analysis: a Gauguin case study

    NASA Astrophysics Data System (ADS)

    Cornelis, Bruno; Dooms, Ann; Leen, Frederik; Munteanu, Adrian; Schelkens, Peter

    2010-08-01

    This paper is an introduction to the analysis of multispectral recordings of paintings. First, we give an overview of the advantages of multispectral image analysis over more traditional techniques: the bands residing in the visible domain provide an accurate measurement of the color information, which can be used for analysis but also for conservation and archival purposes (i.e., preserving the artistic patrimony by building a digital library). Second, inspection of the multispectral imagery by art experts and art conservators has shown that combining the information present in the spectral bands residing inside and outside the visible domain can lead to a richer analysis of paintings. In the remainder of the paper, practical applications of multispectral analysis are demonstrated, where we consider the acquisition of thirteen different high-resolution spectral bands. Nine of these reside in the visible domain, one in the near ultraviolet and three in the infrared. The paper illustrates the promising future of multispectral analysis as a non-invasive tool for acquiring data that cannot be obtained by visual inspection alone and that is highly relevant to art preservation, authentication and restoration. The demonstrated applications include the detection of restored areas and of aging cracks.

  11. Short-range Gravity experiment using digital image analysis

    NASA Astrophysics Data System (ADS)

    Ninomiya, Kazufumi; Kishi, Reiko; Murakami, Haruna; Nishio, Hironori; Ogawa, Naruya; Taketani, Atsushi; Murata, Jiro

    2013-08-01

    According to large extra dimension models, a deviation from Newton's inverse square law is expected below the sub-millimeter range. We have developed an experimental setup using a torsion balance pendulum and an online digital-image analysis system, aiming to test the Newtonian inverse square law below the millimeter scale. In addition, the composition dependence of the gravitational constant G is tested at the millimeter scale, motivated by tests of the weak equivalence principle. In this paper, the current status and results are described.

  12. Scanning tunneling microscopy on rough surfaces-quantitative image analysis

    NASA Astrophysics Data System (ADS)

    Reiss, G.; Brückl, H.; Vancea, J.; Lecheler, R.; Hastreiter, E.

    1991-07-01

    In this communication, the application of scanning tunneling microscopy (STM) to the quantitative evaluation of the roughness and mean island size of polycrystalline thin films is discussed. Provided strong conditions concerning the resolution are satisfied, the results are in good agreement with standard techniques such as transmission electron microscopy. Owing to its high resolution, STM can provide a better characterization of surfaces than established methods, especially concerning roughness. Microscopic interpretations of surface-dependent physical properties can thus be considerably improved by a quantitative analysis of STM images.
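
    As a reminder of the quantity involved, the RMS roughness of a height image is simply the standard deviation of heights about the mean level; a minimal sketch (real STM analysis would also remove the plane tilt first):

```python
import numpy as np

def rms_roughness(height_map):
    """Root-mean-square roughness of a height image, measured about
    the mean level (same units as the input heights)."""
    z = height_map - height_map.mean()
    return np.sqrt((z ** 2).mean())

# A sinusoidal corrugation of amplitude A has Rq = A / sqrt(2).
x = np.linspace(0, 8 * np.pi, 512)
surface = 2.0 * np.sin(x)[None, :] * np.ones((512, 1))
print(rms_roughness(surface))  # ~ 1.414
```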

  13. Multifractal analysis of dynamic infrared imaging of breast cancer

    NASA Astrophysics Data System (ADS)

    Gerasimova, E.; Audit, B.; Roux, S. G.; Khalil, A.; Argoul, F.; Naimark, O.; Arneodo, A.

    2013-12-01

    The wavelet transform modulus maxima (WTMM) method was used in a multifractal analysis of breast skin temperature time-series recorded using dynamic infrared (IR) thermography. Multifractal scaling was found for healthy breasts as the signature of a continuous change in the shape of the probability density function (pdf) of temperature fluctuations across time scales from ~0.3 to 3 s. In contrast, temperature time-series from breasts with malignant tumors showed homogeneous, monofractal temperature fluctuation statistics. These results highlight dynamic IR imaging as a valuable non-invasive technique for preliminary screening of asymptomatic women to identify those at risk of breast cancer.

  14. Automation of the Image Analysis for Thermographic Inspection

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1998-01-01

    Several data processing procedures for pulse thermal inspection require preliminary determination of an unflawed region. Typically, an initial analysis of the thermal images is performed by an operator to determine the locations of unflawed and defective areas. In the present work an algorithm is developed for automatically determining a reference point corresponding to an unflawed region. Results are obtained for defects arbitrarily located in the inspection region. A comparison is presented of the distributions of derived values under correct and incorrect localization of the reference point. Different algorithms for automatic determination of the reference point are compared.

  15. Content-addressable read/write memories for image analysis

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Savage, C. D.

    1982-01-01

    The commonly encountered image analysis problems of region labeling and clustering are found to be cases of a search-and-rename problem, which can be solved in parallel by a system architecture that is inherently suitable for VLSI implementation. This architecture is a novel form of content-addressable memory (CAM) which provides parallel search and update functions, allowing running time to be reduced to constant time per operation. It has been proposed in related investigations by Hall (1981) that, with VLSI, CAM-based structures with enhanced instruction sets for general-purpose processing will be feasible.

  16. Color image digitization and analysis for drum inspection

    SciTech Connect

    Muller, R.C.; Armstrong, G.A.; Burks, B.L.; Kress, R.L.; Heckendorn, F.M.; Ward, C.R.

    1993-05-01

    A rust inspection system that uses color analysis to find rust spots on drums has been developed. The system is composed of high-resolution color video equipment that permits the inspection of rust spots on the order of 0.25 cm (0.1 in.) in diameter. Because of the modular nature of the system design and the use of open-systems software (X11, etc.), the inspection system can be easily integrated into other environmental restoration and waste management programs. The inspection system represents an excellent platform for the integration of other color inspection and color image processing algorithms.

  17. Magnetic resonance imaging in laboratory petrophysical core analysis

    NASA Astrophysics Data System (ADS)

    Mitchell, J.; Chandrasekera, T. C.; Holland, D. J.; Gladden, L. F.; Fordham, E. J.

    2013-05-01

    Magnetic resonance imaging (MRI) is a well-known technique in medical diagnosis and materials science. In the more specialized arena of laboratory-scale petrophysical rock core analysis, the role of MRI has undergone a substantial change in focus over the last three decades. Initially, alongside the continual drive to exploit higher magnetic field strengths in MRI applications for medicine and chemistry, the same trend was followed in core analysis. However, the spatial resolution achievable in heterogeneous porous media is inherently limited due to the magnetic susceptibility contrast between solid and fluid. As a result, imaging resolution at the length-scale of typical pore diameters is not practical and so MRI of core-plugs has often been viewed as an inappropriate use of expensive magnetic resonance facilities. Recently, there has been a paradigm shift in the use of MRI in laboratory-scale core analysis. The focus is now on acquiring data in the laboratory that are directly comparable to data obtained from magnetic resonance well-logging tools (i.e., a common physics of measurement). To maintain consistency with well-logging instrumentation, it is desirable to measure distributions of transverse (T2) relaxation time, the industry-standard metric in well-logging, at the laboratory scale. These T2 distributions can be spatially resolved over the length of a core-plug. The use of low-field magnets in the laboratory environment is optimal for core analysis not only because the magnetic field strength is closer to that of well-logging tools, but also because the magnetic susceptibility contrast is minimized, allowing the acquisition of quantitative image voxel (or pixel) intensities that are directly scalable to liquid volume. Beyond simple determination of macroscopic rock heterogeneity, it is possible to utilize the spatial resolution for monitoring forced displacement of oil by water or chemical agents, determining capillary pressure curves, and estimating

  18. Quantitative assessment of the impact of biomedical image acquisition on the results obtained from image analysis and processing

    PubMed Central

    2014-01-01

    Introduction Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely the operator's (acquisition) impact on the results obtained from image analysis and processing, is shown here through a few examples. Material and method The analysed images were obtained from a variety of medical devices, such as thermal imaging and tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200,000 images from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. Results For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained differs. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in characteristic areas on the patient's back for the thermal method - an error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - an error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - an error of 18%.

  19. Image Analysis to Estimate Mulch Residue in Soil

    PubMed Central

    Moreno, Carmen; Mancebo, Ignacio; Saa, Antonio; Moreno, Marta M.

    2014-01-01

    Mulching is used to improve the condition of agricultural soils by covering the soil with different materials, mainly black polyethylene (PE). However, its use raises the problems of how to remove it from the field and, should it remain in the soil, what effects it may have there. One possible solution is to use biodegradable plastic (BD) or paper (PP) as mulch, which could present an alternative, reducing nonrecyclable waste and decreasing the associated environmental pollution. Determining mulch residues in the ground is one of the basic requirements for estimating the potential of each material to degrade. The goal of this study was to evaluate the residue of several mulch materials over a crop campaign in Central Spain through image analysis. Color images were acquired under similar lighting conditions at the experimental field. Different thresholding methods were applied to binarize the histogram values of the image saturation plane in order to show the best contrast between soil and mulch. The percentage of white pixels (i.e., soil area) was then used to calculate the mulch deterioration. A comparison of the thresholding methods and of the different mulch materials, based on the percentage of bare soil area obtained, is shown. PMID:25309953
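
    A compact version of the thresholding pipeline described above, using Otsu's method on the HSV saturation plane (one of several strategies the study compares; the assumption that bare soil is the more saturated phase is purely illustrative):

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def bare_soil_percent(rgb_image):
    """Percent of the image classified as bare soil by Otsu
    thresholding of the saturation plane."""
    sat = rgb2hsv(rgb_image)[..., 1]     # saturation channel
    soil = sat > threshold_otsu(sat)     # assume soil is more saturated
    return 100.0 * soil.mean()

# Synthetic demo: gray "mulch" with a brown "soil" strip on top.
img = np.full((100, 100, 3), 0.5)
img[:40, :, 0], img[:40, :, 1], img[:40, :, 2] = 0.6, 0.4, 0.2
print(bare_soil_percent(img))            # -> 40.0
```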

  20. Biomedical image analysis using Markov random fields & efficient linear programing.

    PubMed

    Komodakis, Nikos; Besbes, Ahmed; Glocker, Ben; Paragios, Nikos

    2009-01-01

    Computer-aided diagnosis through biomedical image analysis is increasingly considered in the health sciences. This is due to progress made on the acquisition side as well as on the processing side. In vivo visualization of human tissues, where one can determine both anatomical and functional information, is now possible. The use of these images with efficient, intelligent mathematical and processing tools allows the interpretation of the tissue state and facilitates the task of physicians. Segmentation and registration are the two most fundamental tools in bioimaging. The first aims to provide automatic tools for organ delineation from images, while the second focuses on establishing correspondences between observations, inter- and intra-subject and across modalities. In this paper, we present some recent results towards a common formulation addressing these problems, based on Markov Random Fields. Such an approach is modular with respect to the application context, can be easily extended to deal with various modalities, provides guarantees on the optimality properties of the obtained solution, and is computationally efficient. PMID:19963682