Sample records for analyzer-based imaging technique

  1. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying the DWT and filtering techniques alone caused information loss and residual noise, and did not give the most significant noise reduction. Conversely, an image fusion method using the SRAD-original input conditions (speckle reducing anisotropic diffusion combined with the original image) preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance on the ultrasound images. The denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
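
    The record above describes a wavelet-domain fusion strategy for speckle reduction. The sketch below is a minimal illustration of that idea, not the authors' code: two pre-filtered versions of a noisy image (a median-filtered and a Gaussian-filtered copy standing in for the SRAD/original pair) are fused in the DWT domain with an averaged approximation band and a max-absolute rule on the detail bands. The wavelet choice, filter settings, and synthetic speckle model are all assumptions.

```python
# Hedged sketch: wavelet-domain fusion of two pre-filtered versions of a noisy
# ultrasound-like image. The median/Gaussian pair is an assumed stand-in for the
# SRAD/original inputs discussed in the abstract.
import numpy as np
import pywt
from scipy.ndimage import median_filter, gaussian_filter

def dwt_fusion(img_a, img_b, wavelet="db2", level=2):
    """Fuse two images: average the approximation band, keep the detail
    coefficient with the larger magnitude at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # low-pass band: mean
    for da, db in zip(ca[1:], cb[1:]):              # detail bands: max-abs rule
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)   # multiplicative speckle (mean 1)
fused = dwt_fusion(median_filter(noisy, 3), gaussian_filter(noisy, 1.0))
print("std before/after fusion:", round(noisy.std(), 3), round(fused.std(), 3))
```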

  2. The Design and Development of Test Platform for Wheat Precision Seeding Based on Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie

    A test platform for wheat precision seeding based on image processing techniques was designed to support development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, the platform gathers images of wheat seeds on the conveyor belt as they fall from the seed metering device. These data are then processed and analyzed to calculate the qualified rate, reseeding rate, leakage sowing rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and locating each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
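
    As an illustration of the seed identification and spacing measurement described above, the following hedged sketch thresholds a synthetic conveyor-belt frame, labels connected seed blobs, and measures centroid-to-centroid spacing. All parameter values and the synthetic frame are assumptions, not the platform's actual configuration.

```python
# Illustrative sketch (not the authors' code): detect seeds in a thresholded frame
# and measure seed-to-seed spacing from the blob centroids.
import numpy as np
from scipy import ndimage

def seed_spacings(frame, threshold):
    mask = frame > threshold                          # image thresholding
    labels, n = ndimage.label(mask)                   # connected seed blobs
    centers = np.array(ndimage.center_of_mass(mask, labels, range(1, n + 1)))
    order = np.argsort(centers[:, 1])                 # sort along the belt direction
    gaps = np.diff(centers[order, 1])                 # pixel spacing between neighbours
    return centers, gaps

# synthetic belt image with three "seeds"
frame = np.zeros((60, 200))
for col in (40, 100, 158):
    frame[28:33, col:col + 6] = 255.0
centers, gaps = seed_spacings(frame, threshold=128)
print("seed centers:\n", centers.round(1))
print("spacings (px):", gaps.round(1))   # compare against the nominal seed spacing
```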

  3. Visualization of ultrasound induced cavitation bubbles using the synchrotron x-ray Analyzer Based Imaging technique.

    PubMed

    Izadifar, Zahra; Belev, George; Izadifar, Mohammad; Izadifar, Zohreh; Chapman, Dean

    2014-12-07

    Observing cavitation bubbles deep within tissue is very difficult. The development of a method for probing cavitation, irrespective of its location in tissues, would improve the efficiency and application of ultrasound in the clinic. A synchrotron x-ray imaging technique, which is capable of detecting cavitation bubbles induced in water by a sonochemistry system, is reported here; this could possibly be extended to the study of therapeutic ultrasound in tissues. The two different x-ray imaging techniques of Analyzer Based Imaging (ABI) and phase contrast imaging (PCI) were examined in order to detect ultrasound induced cavitation bubbles. Cavitation was not observed with PCI; however, it was detectable with ABI. Acoustic cavitation was imaged at six different acoustic power levels, and at six different locations through the acoustic beam in water at a fixed power level. The results indicate the potential utility of this technique for cavitation studies in tissues, but it is time consuming. This may be improved by optimizing the imaging method.

  4. A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement.

    PubMed

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, which uses a median filter to estimate the background illumination, showed the lowest coefficient of variation in the red component; the quotient and homomorphic filtering methods presented the next best results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and it showed higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method with median-filter background estimation, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
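
    A minimal sketch of two of the preprocessing steps compared in this record, under illustrative assumptions: the "dividing" illumination correction (the channel divided by a median-filter estimate of the background), evaluated with the coefficient of variation, followed by CLAHE contrast enhancement. The kernel size, clip limit, and synthetic retinal channel are assumptions.

```python
# Hedged sketch of the dividing illumination correction and CLAHE enhancement.
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

def dividing_correction(channel, kernel=31):
    background = median_filter(channel, size=kernel)      # slow illumination field
    corrected = channel / np.clip(background, 1e-6, None)
    return corrected / corrected.max()                     # rescale to [0, 1]

def coefficient_of_variation(channel):
    return channel.std() / channel.mean()

# synthetic "retinal" green channel: dark stripes on an uneven illumination gradient
y, x = np.mgrid[0:256, 0:256]
illum = 0.4 + 0.6 * (x / 255.0)                  # left-to-right shading
vessels = (np.sin(y / 6.0) > 0.95).astype(float) * 0.3
green = np.clip(illum - vessels, 0, 1)

corrected = dividing_correction(green)
enhanced = exposure.equalize_adapthist(corrected, clip_limit=0.02)   # CLAHE
print("CV before:", round(coefficient_of_variation(green), 3))
print("CV after :", round(coefficient_of_variation(corrected), 3))
print("CLAHE output range: %.2f to %.2f" % (enhanced.min(), enhanced.max()))
```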

  5. Biometric image enhancement using decision rule based image fusion techniques

    NASA Astrophysics Data System (ADS)

    Sagayee, G. Mary Amirtha; Arumugam, S.

    2010-02-01

    Introducing biometrics into information systems may result in considerable benefits. Most researchers confirm that the fingerprint is more widely used than the iris or face, and moreover it is the primary choice for most privacy-concerned applications. For fingerprint applications, choosing a proper sensor is a critical issue. The proposed work deals with how image quality can be improved by introducing an image fusion technique at the sensor level. The images obtained after applying the decision rule based image fusion technique are evaluated and analyzed in terms of their entropy levels and root mean square error.
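
    The evaluation metrics named in this record, entropy and root mean square error, are straightforward to compute; a small sketch is given below. The fusion rule itself is not reproduced, and the test images are synthetic stand-ins.

```python
# Minimal sketch of the two evaluation metrics: Shannon entropy of the fused image
# and RMSE against a reference. Test data are synthetic, for illustration only.
import numpy as np

def shannon_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def rmse(reference, test):
    return float(np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2)))

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, (64, 64)).astype(float)
fused = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255)
print("entropy :", round(shannon_entropy(fused), 3))
print("RMSE    :", round(rmse(reference, fused), 3))
```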

  6. A Comparative Study on Preprocessing Techniques in Diabetic Retinopathy Retinal Images: Illumination Correction and Contrast Enhancement

    PubMed Central

    Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza

    2015-01-01

    To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, which uses a median filter to estimate the background illumination, showed the lowest coefficient of variation in the red component; the quotient and homomorphic filtering methods presented the next best results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and it showed higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method with median-filter background estimation, the quotient-based method, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940

  7. Real-time color image processing for forensic fiber investigations

    NASA Astrophysics Data System (ADS)

    Paulsson, Nils

    1995-09-01

    This paper describes a system for automatic fiber debris detection based on color identification. The properties of the system are fast analysis and high selectivity, a necessity when analyzing forensic fiber samples; an ordinary investigation separates the material into well over 100,000 video images to analyze. The system is built from standard components such as a CCD camera, a motorized sample table, and an IBM-compatible PC/AT with add-on boards for video frame digitization and stepping motor control. It is possible to operate the instrument at full video rate (25 images/s) with the aid of the HSI (hue-saturation-intensity) color system and software optimization. High selectivity is achieved by separating the analysis into several steps: the first step is fast, direct color identification of objects in the analyzed video images, and the second step analyzes the detected objects in a more complex and time-consuming stage of the investigation to identify single fiber fragments for subsequent analysis with more selective techniques.
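
    The fast first stage described above, direct color identification of objects, can be sketched as a hue/saturation threshold in an HSV representation (standing in here for the HSI system the record mentions). The hue window and the synthetic frame are assumed values for illustration only.

```python
# Rough sketch of hue-based object detection. The target hue window is an assumed
# example, not the forensic system's actual settings.
import numpy as np
from skimage.color import rgb2hsv

def hue_mask(rgb_frame, hue_lo, hue_hi, min_saturation=0.3):
    hsv = rgb2hsv(rgb_frame)                     # H, S, V in [0, 1]
    h, s = hsv[..., 0], hsv[..., 1]
    return (h >= hue_lo) & (h <= hue_hi) & (s >= min_saturation)

frame = np.zeros((120, 160, 3))
frame[50:55, 30:90] = (0.1, 0.2, 0.8)            # a blue "fiber" on a black background
mask = hue_mask(frame, hue_lo=0.55, hue_hi=0.72)
print("candidate fiber pixels:", int(mask.sum()))
```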

  8. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

    The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the measurement angular positions, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
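
    The CRLB computation summarized above can be illustrated numerically. The hedged sketch below assumes a Gaussian rocking curve and a simplified two-parameter object model (transmission and refraction angle), neither of which is the parametrisation used in the paper; for Poisson counts the Fisher information is I_mn = sum_j (1/lambda_j) (d lambda_j / d p_m)(d lambda_j / d p_n), and the CRLB is the diagonal of its inverse.

```python
# Hedged numerical sketch of a CRLB for Poisson counts measured at several
# analyzer-crystal angles. Rocking-curve shape, object model, and all parameter
# values are assumptions made purely for illustration.
import numpy as np

SIGMA = 2.0e-6                                   # rocking-curve width (rad), assumed
N0 = 1.0e5                                       # incident photons per angular point

def rocking(theta):
    return np.exp(-0.5 * (theta / SIGMA) ** 2)

def expected_counts(params, angles):
    t, dtheta = params                           # transmission, refraction angle
    return N0 * t * rocking(angles - dtheta)

def crlb(params, angles, eps=1e-9):
    lam = expected_counts(params, angles)
    grads = []
    for m in range(len(params)):                 # central-difference gradients
        p_hi = np.array(params, float); p_hi[m] += eps
        p_lo = np.array(params, float); p_lo[m] -= eps
        grads.append((expected_counts(p_hi, angles)
                      - expected_counts(p_lo, angles)) / (2 * eps))
    grads = np.array(grads)
    fisher = (grads[:, None, :] * grads[None, :, :] / lam).sum(axis=2)
    return np.diag(np.linalg.inv(fisher))        # variance lower bounds

true = (0.6, 0.3e-6)                             # 60% transmission, 0.3 urad refraction
angles = np.linspace(-2 * SIGMA, 2 * SIGMA, 11)  # 11 uniform analyzer positions
print("CRLB (var of t, var of dtheta):", crlb(true, angles))
```

    Analytic derivatives could replace the numerical ones here; the finite-difference version is used only to keep the sketch short.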

  9. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

    The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), the measurement angular positions, the object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements only spread the total dose across the measurements without improving or worsening the CRLB, but the added measurements may improve parametric images by reducing estimation bias. Next, using the CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402

  10. Development of an x-ray prism for analyzer based imaging systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bewer, Brian; Chapman, Dean

    Analyzer crystal based imaging techniques such as diffraction enhanced imaging (DEI) and multiple imaging radiography (MIR) utilize the Bragg peak of perfect crystal diffraction to convert angular changes into intensity changes. These x-ray techniques extend the capability of conventional radiography, which derives image contrast from absorption, by providing large intensity changes for small angle changes introduced from the x-ray beam traversing the sample. Objects that have very little absorption contrast may have considerable refraction and ultrasmall angle x-ray scattering contrast improving visualization and extending the utility of x-ray imaging. To improve on the current DEI technique an x-ray prism (XRP) was designed and included in the imaging system. The XRP allows the analyzer crystal to be aligned anywhere on the rocking curve without physically moving the analyzer from the Bragg angle. By using the XRP to set the rocking curve alignment rather than moving the analyzer crystal physically the needed angle sensitivity is changed from submicroradians for direct mechanical movement of the analyzer crystal to tens of milliradians for movement of the XRP angle. However, this improvement in angle positioning comes at the cost of absorption loss in the XRP and depends on the x-ray energy. In addition to using an XRP for crystal alignment it has the potential for scanning quickly through the entire rocking curve. This has the benefit of collecting all the required data for image reconstruction in a single measurement thereby removing some problems with motion artifacts which remain a concern in current DEI/MIR systems especially for living animals.

  11. Development of an x-ray prism for analyzer based imaging systems

    NASA Astrophysics Data System (ADS)

    Bewer, Brian; Chapman, Dean

    2010-08-01

    Analyzer crystal based imaging techniques such as diffraction enhanced imaging (DEI) and multiple imaging radiography (MIR) utilize the Bragg peak of perfect crystal diffraction to convert angular changes into intensity changes. These x-ray techniques extend the capability of conventional radiography, which derives image contrast from absorption, by providing large intensity changes for small angle changes introduced from the x-ray beam traversing the sample. Objects that have very little absorption contrast may have considerable refraction and ultrasmall angle x-ray scattering contrast improving visualization and extending the utility of x-ray imaging. To improve on the current DEI technique an x-ray prism (XRP) was designed and included in the imaging system. The XRP allows the analyzer crystal to be aligned anywhere on the rocking curve without physically moving the analyzer from the Bragg angle. By using the XRP to set the rocking curve alignment rather than moving the analyzer crystal physically the needed angle sensitivity is changed from submicroradians for direct mechanical movement of the analyzer crystal to tens of milliradians for movement of the XRP angle. However, this improvement in angle positioning comes at the cost of absorption loss in the XRP and depends on the x-ray energy. In addition to using an XRP for crystal alignment it has the potential for scanning quickly through the entire rocking curve. This has the benefit of collecting all the required data for image reconstruction in a single measurement thereby removing some problems with motion artifacts which remain a concern in current DEI/MIR systems especially for living animals.

  12. Development of an x-ray prism for analyzer based imaging systems.

    PubMed

    Bewer, Brian; Chapman, Dean

    2010-08-01

    Analyzer crystal based imaging techniques such as diffraction enhanced imaging (DEI) and multiple imaging radiography (MIR) utilize the Bragg peak of perfect crystal diffraction to convert angular changes into intensity changes. These x-ray techniques extend the capability of conventional radiography, which derives image contrast from absorption, by providing large intensity changes for small angle changes introduced from the x-ray beam traversing the sample. Objects that have very little absorption contrast may have considerable refraction and ultrasmall angle x-ray scattering contrast improving visualization and extending the utility of x-ray imaging. To improve on the current DEI technique an x-ray prism (XRP) was designed and included in the imaging system. The XRP allows the analyzer crystal to be aligned anywhere on the rocking curve without physically moving the analyzer from the Bragg angle. By using the XRP to set the rocking curve alignment rather than moving the analyzer crystal physically the needed angle sensitivity is changed from submicroradians for direct mechanical movement of the analyzer crystal to tens of milliradians for movement of the XRP angle. However, this improvement in angle positioning comes at the cost of absorption loss in the XRP and depends on the x-ray energy. In addition to using an XRP for crystal alignment it has the potential for scanning quickly through the entire rocking curve. This has the benefit of collecting all the required data for image reconstruction in a single measurement thereby removing some problems with motion artifacts which remain a concern in current DEI/MIR systems especially for living animals.

  13. Simultaneous imaging of fat crystallinity and crystal polymorphic types by Raman microspectroscopy.

    PubMed

    Motoyama, Michiyo; Ando, Masahiro; Sasaki, Keisuke; Nakajima, Ikuyo; Chikuni, Koichi; Aikawa, Katsuhiro; Hamaguchi, Hiro-O

    2016-04-01

    The crystalline states of fats, i.e., the crystallinity and crystal polymorphic types, strongly influence their physical properties in fat-based foods. Imaging of fat crystalline states has thus been a subject of abiding interest, but conventional techniques cannot image crystallinity and polymorphic types all at once. This article demonstrates a new technique using Raman microspectroscopy for simultaneously imaging the crystallinity and polymorphic types of fats. The crystallinity and β' crystal polymorph, which contribute to the hardness of fat-based food products, were quantitatively visualized in a model fat (porcine adipose tissue) by analyzing several key Raman bands. The emergence of the β crystal polymorph, which generally results in food product deterioration, was successfully imaged by analyzing the whole fingerprint regions of Raman spectra using multivariate curve resolution alternating least squares analysis. The results demonstrate that the crystalline states of fats can be nondestructively visualized and analyzed at the molecular level, in situ, without laborious sample pretreatments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Simulation of a compact analyzer-based imaging system with a regular x-ray source

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.

    2017-03-01

    Analyzer-based Imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray techniques. PC measures X-ray deflection phenomena when interacting with a sample, which is known to provide higher contrast images of soft tissue than other X-ray methods. This is of high interest in the medical field, in particular for mammogram applications. This paper presents a simulation tool for table-top ABI systems using a conventional polychromatic X-ray source.

  15. X-ray Phase Contrast Allows Three Dimensional, Quantitative Imaging of Hydrogel Implants

    PubMed Central

    Appel, Alyssa A.; Larson, Jeffery C.; Jiang, Bin; Zhong, Zhong; Anastasio, Mark A.; Brey, Eric M.

    2015-01-01

    Three dimensional imaging techniques are needed for the evaluation and assessment of biomaterials used for tissue engineering and drug delivery applications. Hydrogels are a particularly popular class of materials for medical applications but are difficult to image in tissue using most available imaging modalities. Imaging techniques based on X-ray Phase Contrast (XPC) have shown promise for tissue engineering applications due to their ability to provide image contrast based on multiple X-ray properties. In this manuscript, we investigate the use of XPC for imaging a model hydrogel and soft tissue structure. Porous fibrin loaded poly(ethylene glycol) hydrogels were synthesized and implanted in a rodent subcutaneous model. Samples were explanted, imaged with an analyzer-based XPC technique, and processed and stained for histology for comparison. Both hydrogel and soft tissue structures could be identified in XPC images. Structure in adjacent skeletal muscle could be visualized and invading fibrovascular tissue could be quantified. There were no differences between invading tissue measurements from XPC and the gold-standard histology. These results provide evidence of the significant potential of techniques based on XPC for 3D imaging of hydrogel structure and local tissue response. PMID:26487123

  16. X-ray Phase Contrast Allows Three Dimensional, Quantitative Imaging of Hydrogel Implants

    DOE PAGES

    Appel, Alyssa A.; Larson, Jeffrey C.; Jiang, Bin; ...

    2015-10-20

    Three dimensional imaging techniques are needed for the evaluation and assessment of biomaterials used for tissue engineering and drug delivery applications. Hydrogels are a particularly popular class of materials for medical applications but are difficult to image in tissue using most available imaging modalities. Imaging techniques based on X-ray Phase Contrast (XPC) have shown promise for tissue engineering applications due to their ability to provide image contrast based on multiple X-ray properties. In this manuscript we describe results using XPC to image a model hydrogel and soft tissue structure. Porous fibrin loaded poly(ethylene glycol) hydrogels were synthesized and implanted in a rodent subcutaneous model. Samples were explanted, imaged with an analyzer-based XPC technique, and processed and stained for histology for comparison. Both hydrogel and soft tissue structures could be identified in XPC images. Structure in adjacent skeletal muscle could be visualized and invading fibrovascular tissue could be quantified. In quantitative results, there were no differences between XPC and the gold-standard histological measurements. These results provide evidence of the significant potential of techniques based on XPC for 3D imaging of hydrogel structure and local tissue response.

  17. Image Edge Extraction via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A. (Inventor); Klinko, Steve (Inventor)

    2008-01-01

    A computer-based technique for detecting edges in gray level digital images employs fuzzy reasoning to analyze whether each pixel in an image is likely on an edge. The image is analyzed on a pixel-by-pixel basis by analyzing gradient levels of pixels in a square window surrounding the pixel being analyzed. An edge path passing through the pixel having the greatest intensity gradient is used as input to a fuzzy membership function, which employs fuzzy singletons and inference rules to assign a new gray level value to the pixel that is related to the pixel's degree of edginess.

  18. Development of an X-ray prism for a combined diffraction enhanced imaging and fluorescence imaging system

    NASA Astrophysics Data System (ADS)

    Bewer, Brian E.

    Analyzer crystal based imaging techniques such as diffraction enhanced imaging (DEI) and multiple imaging radiography (MIR) utilize the Bragg peak of perfect crystal diffraction to convert angular changes into intensity changes. These X-ray techniques extend the capability of conventional radiography, which derives image contrast from absorption, by providing a large change in intensity for a small angle change introduced by the X-ray beam traversing the sample. Objects that have very little absorption contrast may have considerable refraction and ultra small angle X-ray scattering (USAXS) contrast, thus improving visualization and extending the utility of X-ray imaging. To improve on the current DEI technique, this body of work describes the design of an X-ray prism (XRP) included in the imaging system which allows the analyzer crystal to be aligned anywhere on the rocking curve without moving the analyzer from the Bragg angle. By using the XRP to set the rocking curve alignment rather than moving the analyzer crystal physically, the needed angle sensitivity is changed from microradians for direct mechanical movement of the analyzer crystal to milliradian control for movement of the XRP angle. In addition to using an XRP for the traditional DEI acquisition method of two scans on opposite sides of the rocking curve, preliminary tests will be presented showing the potential of using an XRP to scan quickly through the entire rocking curve. This has the benefit of collecting all the required data for image reconstruction in a single fast measurement, thus removing the occurrence of motion artifacts for each point or line used during a scan. The XRP design is also intended to be compatible with combined imaging systems where more than one technique is used to investigate a sample. Candidates for complementary techniques are investigated and measurements from a combined X-ray imaging system are presented.

  19. Infrared thermal imaging of atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Watt, David; Mchugh, John

    1990-01-01

    A technique for analyzing infrared atmospheric images to obtain cross-wind measurement is presented. The technique is based on Taylor's frozen turbulence hypothesis and uses cross-correlation of successive images to obtain a measure of the cross-wind velocity in a localized focal region. The technique is appealing because it can possibly be combined with other IR forward look capabilities and may provide information about turbulence intensity. The current research effort, its theoretical basis, and its applicability to windshear detection are described.
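
    The cross-correlation step described above can be sketched with a simple FFT-based estimator: the peak of the cross-correlation between two successive frames gives the frame-to-frame pixel shift, which, scaled by the pixel size and frame interval, would give a cross-wind estimate. Frozen-turbulence advection is mimicked here by circularly shifting a random field; everything in the sketch is an illustrative assumption.

```python
# Illustrative sketch: estimate the integer-pixel shift between two frames via
# FFT-based cross-correlation (not the authors' processing chain).
import numpy as np

def image_shift(frame_a, frame_b):
    """Integer-pixel shift of frame_b relative to frame_a."""
    fa = np.fft.fft2(frame_a - frame_a.mean())
    fb = np.fft.fft2(frame_b - frame_b.mean())
    corr = np.fft.ifft2(np.conj(fa) * fb).real       # circular cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak indices into signed shifts
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

rng = np.random.default_rng(2)
frame_a = rng.normal(size=(128, 128))
frame_b = np.roll(frame_a, shift=(0, 7), axis=(0, 1))   # pattern advected 7 px in x
print("estimated shift (rows, cols):", image_shift(frame_a, frame_b))
```

    Sub-pixel refinement (for example by fitting a parabola to the correlation peak) would be a natural next step for an actual wind estimate.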

  20. A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis

    PubMed Central

    Rahman, M. M.; Antani, S. K.; Thoma, G. R.

    2011-01-01

    We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350

  1. Breast tumor segmentation in high resolution x-ray phase contrast analyzer based computed tomography.

    PubMed

    Brun, E; Grandl, S; Sztrókay-Gaul, A; Barbone, G; Mittone, A; Gasilov, S; Bravin, A; Coan, P

    2014-11-01

    Phase contrast computed tomography has emerged as an imaging method which is able to outperform present day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.
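
    The viscous watershed transform used in this record is not available in common open-source libraries; the hedged sketch below instead uses the standard watershed from scikit-image, with hand-placed markers mimicking the human-guided seeding, on a synthetic lesion. It illustrates the general marker-driven watershed idea only, not the authors' method.

```python
# Hedged sketch: standard (non-viscous) marker-driven watershed segmentation of a
# synthetic bright "lesion"; seeds stand in for the human guidance described above.
import numpy as np
from skimage.segmentation import watershed
from skimage.filters import sobel

# synthetic slice: a bright blob inside darker "tissue"
y, x = np.mgrid[0:128, 0:128]
image = 0.2 + 0.8 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 15.0 ** 2))

markers = np.zeros_like(image, dtype=int)
markers[64, 64] = 1          # seed inside the lesion (user-provided)
markers[5, 5] = 2            # seed in the background tissue
elevation = sobel(image)     # gradient magnitude as the flooding landscape
labels = watershed(elevation, markers)
print("segmented lesion area (px):", int((labels == 1).sum()))
```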

  2. Handheld Fluorescence Microscopy based Flow Analyzer.

    PubMed

    Saxena, Manish; Jayakumar, Nitin; Gorthi, Sai Siva

    2016-03-01

    Fluorescence microscopy has the intrinsic advantages of favourable contrast characteristics and a high degree of specificity. Consequently, it has been a mainstay in modern biological inquiry and clinical diagnostics. Despite its reliable nature, fluorescence based clinical microscopy and diagnostics is a manual, labour intensive and time consuming procedure. This article outlines a cost-effective, high throughput alternative to conventional fluorescence imaging techniques. With system level integration of custom-designed microfluidics and optics, we demonstrate a fluorescence microscopy based imaging flow analyzer. Using this system we have imaged more than 2900 FITC labeled fluorescent beads per minute, demonstrating the high-throughput characteristics of our flow analyzer in comparison to conventional fluorescence microscopy. The issue of motion blur at high flow rates limits the achievable throughput in image based flow analyzers. Here we address the issue by computationally deblurring the images and show that this restores the morphological features otherwise affected by motion blur. By further optimizing the concentration of the sample solution and the flow speeds, along with imaging multiple channels simultaneously, the system is capable of providing a throughput of about 480 beads per second.
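
    The computational deblurring step mentioned above can be illustrated with Wiener deconvolution under an assumed horizontal motion-blur point spread function; the real system's PSF, flow speed, and regularization settings are not reproduced here.

```python
# Hedged sketch: remove an assumed 9-pixel horizontal motion blur from a synthetic
# "bead" image with Wiener deconvolution.
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import wiener

def motion_psf(length):
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0 / length            # horizontal streak
    return psf

rng = np.random.default_rng(3)
bead = np.zeros((64, 64)); bead[28:36, 28:36] = 1.0       # an in-focus "bead"
psf = motion_psf(9)                                       # blur length is an assumption
blurred = convolve2d(bead, psf, mode="same") + rng.normal(0, 0.01, bead.shape)
restored = wiener(blurred, psf, balance=0.01)
print("max horizontal edge step, blurred :", round(np.abs(np.diff(blurred, axis=1)).max(), 3))
print("max horizontal edge step, restored:", round(np.abs(np.diff(restored, axis=1)).max(), 3))
```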

  3. Image analysis for quantification of bacterial rock weathering.

    PubMed

    Puente, M Esther; Rodriguez-Jaramillo, M Carmen; Li, Ching Y; Bashan, Yoav

    2006-02-01

    A fast, quantitative image analysis technique was developed to assess potential rock weathering by bacteria. The technique is based on the reduction in the surface area of rock particles and on counting the relative increase in the number of small particles in ground rock slurries. This was done by recording changes in ground rock samples with an electronic image analyzing process. The slurries were previously amended with three carbon sources, ground to a uniform particle size, and incubated with rock-weathering bacteria for 28 days. The technique was developed and tested using two rock-weathering bacteria, Pseudomonas putida R-20 and Azospirillum brasilense Cd, on marble, granite, apatite, quartz, limestone, and volcanic rock as substrates. The image analyzer processed a large number of particles (10^7-10^8 per sample), so that the weathering capacity of the bacteria could be detected.

  4. Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters

    PubMed Central

    Dewal, M. L.; Rohit, Manoj Kumar

    2014-01-01

    Echocardiographic images are inherently affected by speckle noise, which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details necessary for diagnosis of abnormalities. A novel speckle reduction technique based on the integration of geometric, Wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed along with 26 denoising techniques. It is observed that the geometric filter retains noise and, to address this issue, a Wiener filter is embedded into the geometric filter during the iteration process. The performance of the geometric-Wiener filter is further enhanced using fuzzy filters, and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on the moving average and the median value are employed in the integrated fuzzy filters. The performance of the integrated fuzzy filters is tested on echocardiographic images and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest in the case of the integrated fuzzy filters in comparison to the fuzzy and geometric-fuzzy filters. The clinical validation reveals that the output images obtained using the geometric-Wiener, integrated fuzzy, nonlocal means, and detail-preserving anisotropic diffusion filters are acceptable. The necessary finer details are retained in the denoised echocardiographic images. PMID:27437499

  5. Gas temperature and density measurements based on spectrally resolved Rayleigh-Brillouin scattering

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard G.; Lock, James A.

    1992-01-01

    The use of molecular Rayleigh scattering for measurements of gas density and temperature is evaluated. The technique used is based on the measurement of the spectrum of the scattered light, where both temperature and density are determined from the spectral shape. Planar imaging of Rayleigh scattering from air using a laser light sheet is evaluated for ambient conditions. The Cramer-Rao lower bounds for the shot-noise limited density and temperature measurement uncertainties are calculated for an ideal optical spectrum analyzer and for a planar mirror Fabry-Perot interferometer used in a static, imaging mode. With this technique, a single image of the Rayleigh scattered light can be analyzed to obtain density (or pressure) and temperature. Experimental results are presented for planar measurements taken in a heated air stream.

  6. Wavelength-encoded tomography based on optical temporal Fourier transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chi; Wong, Kenneth K. Y., E-mail: kywong@eee.hku.hk

    We propose and demonstrate a technique called wavelength-encoded tomography (WET) for non-invasive optical cross-sectional imaging, particularly beneficial in biological systems. WET utilizes a time-lens to perform the optical Fourier transform, and the time-to-wavelength conversion generates a wavelength-encoded image of optical scattering from internal microstructures, analogous to interferometry-based imaging such as optical coherence tomography. The optical Fourier transform, in principle, comes with twice as good axial resolution as the electrical Fourier transform, and will greatly simplify the digital signal processing after data acquisition. As a proof-of-principle demonstration, a 150-μm (ideally 36 μm) resolution is achieved based on a 7.5-nm bandwidth swept pump, using a conventional optical spectrum analyzer. This approach can potentially achieve up to 100-MHz or even higher frame rates with some proven ultrafast spectrum analyzers. We believe that this technique is innovative towards the next-generation ultrafast optical tomographic imaging application.

  7. Radar imaging using electromagnetic wave carrying orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Yuan, Tiezhu; Cheng, Yongqiang; Wang, Hongqiang; Qin, Yuliang; Fan, Bo

    2017-03-01

    The concept of radar imaging based on orbital angular momentum (OAM) modulation, which has the ability of azimuthal resolution without relative motion, has recently been proposed. We investigate this imaging technique further in greater detail. We first analyze the principle of the technique, accounting for its resolving ability physically. The phase and intensity distributions of the OAM-carrying fields produced by phased uniform circular array antenna, which have significant effects on the imaging results, are investigated. The imaging model shows that the received signal has the form of inverse discrete Fourier transform with the use of OAM and frequency diversities. The two-dimensional Fourier transform is employed to reconstruct the target images in the case of large and small elevation angles. Due to the peculiar phase and intensity characteristics, the small elevation is more suitable for practical application than the large one. The minimum elevation angle is then obtained given the array parameters. The imaging capability is analyzed by means of the point spread function. All results are verified through numerical simulations. The proposed staring imaging technique can achieve extremely high azimuthal resolution with the use of plentiful OAM modes.

  8. Computer Analysis of Eye Blood-Vessel Images

    NASA Technical Reports Server (NTRS)

    Wall, R. J.; White, B. S.

    1984-01-01

    Technique rapidly diagnoses diabetes mellitus. Photographs of "whites" of patients' eyes scanned by computerized image analyzer programmed to quantify density of small blood vessels in conjunctiva. Comparison with data base of known normal and diabetic patients facilitates rapid diagnosis.

  9. Note: A portable Raman analyzer for microfluidic chips based on a dichroic beam splitter for integration of imaging and signal collection light paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Yijia; Xu, Shuping; Xu, Weiqing, E-mail: xuwq@jlu.edu.cn

    An integrated and portable Raman analyzer featuring an inverted probe fixed on a motor-driven adjustable optical module was designed for combination with a microfluidic system. It possesses a micro-imaging function, and the inverted configuration is advantageous for locating and focusing on microfluidic channels. Different from commercial micro-imaging Raman spectrometers using a manually switchable light path, this analyzer adopts a dichroic beam splitter for both the imaging and signal collection light paths, which avoids movable parts and improves the integration and stability of the optics. Combined with the surface-enhanced Raman scattering technique, this portable Raman micro-analyzer is promising as a powerful tool for microfluidic analytics.

  10. Restoration of moving binary images degraded owing to phosphor persistence.

    PubMed

    Cherri, A K; Awwal, A A; Karim, M A; Moon, D L

    1991-09-10

    The degraded images of dynamic objects obtained by using a phosphor-based electro-optical display are analyzed in terms of dynamic modulation transfer function (DMTF) and temporal characteristics of the display system. The direct correspondence between the DMTF and image smear is used in developing real-time techniques for the restoration of degraded images.

  11. Mathematical models used in segmentation and fractal methods of 2-D ultrasound images

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Moraru, Luminita; Bibicu, Dorin

    2012-11-01

    Mathematical models are widely used in biomedical computing. The data extracted from images using mathematical techniques are the "pillar" for achieving scientific progress in experimental, clinical, biomedical, and behavioural research. This article deals with the representation of 2-D images and highlights the mathematical support for the segmentation operation and fractal analysis in ultrasound images. A large number of mathematical techniques are suitable to be applied during the image processing stage. The addressed topics cover edge-based segmentation, more precisely gradient-based edge detection and the active contour model, and region-based segmentation, namely the Otsu method. Another interesting mathematical approach consists of analyzing the images using the Box Counting Method (BCM) to compute the fractal dimension. The results of the paper provide explicit examples produced by various combinations of methods.
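
    Of the methods listed in this record, the Box Counting Method (BCM) is compact enough to sketch directly: count occupied boxes at several scales and fit log N(s) against log(1/s) to estimate the fractal dimension. The box sizes and test shapes below are illustrative assumptions.

```python
# Minimal sketch of box-counting fractal dimension estimation on binary test shapes.
import numpy as np

def box_count(binary, size):
    h, w = binary.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if binary[i:i + size, j:j + size].any():
                count += 1
    return count

def fractal_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(binary, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# a filled square has dimension ~2; its outline ~1
img = np.zeros((128, 128), dtype=bool)
img[32:96, 32:96] = True
outline = (img ^ np.roll(img, 1, axis=0)) | (img ^ np.roll(img, 1, axis=1))
print("filled  :", round(fractal_dimension(img), 2))
print("outline :", round(fractal_dimension(outline), 2))
```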

  12. A three-image algorithm for hard x-ray grating interferometry.

    PubMed

    Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia

    2013-08-12

    A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method comprises a post-processing approach that is an alternative to the conventional phase stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
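
    For the negligible-scattering case mentioned above, the closely related two-image analyzer-based (DEI) reconstruction separates an apparent-absorption image and a refraction-angle image from two images taken on opposite slopes of the rocking curve R(theta). The sketch below assumes a Gaussian rocking curve and synthetic pixel values purely for illustration; it is the classic analyzer-based two-image algebra, not the grating-interferometry algorithm of this record.

```python
# Hedged sketch of the two-image analyzer-based (DEI-style) separation of apparent
# absorption I_R and refraction angle dtheta, assuming a Gaussian rocking curve.
import numpy as np

SIGMA = 2.0e-6                                    # rocking-curve width (rad), assumed

def rocking(th):
    return np.exp(-0.5 * (th / SIGMA) ** 2)

def d_rocking(th):
    return -th / SIGMA ** 2 * rocking(th)         # analytic derivative

TH_L, TH_H = -SIGMA, SIGMA                        # low/high-angle slope positions

def dei_two_image(i_low, i_high):
    # solve I_L = I_R [R(TH_L) + R'(TH_L) dtheta], I_H = I_R [R(TH_H) + R'(TH_H) dtheta]
    num_dt = i_high * rocking(TH_L) - i_low * rocking(TH_H)
    den = i_low * d_rocking(TH_H) - i_high * d_rocking(TH_L)
    dtheta = num_dt / den
    i_r = den / (rocking(TH_L) * d_rocking(TH_H) - rocking(TH_H) * d_rocking(TH_L))
    return i_r, dtheta

# forward-simulate a small absorption + refraction "object" and invert it
true_ir = np.full((4, 4), 0.7)
true_dt = np.full((4, 4), 0.2e-6)
i_low = true_ir * (rocking(TH_L) + d_rocking(TH_L) * true_dt)
i_high = true_ir * (rocking(TH_H) + d_rocking(TH_H) * true_dt)
ir_rec, dt_rec = dei_two_image(i_low, i_high)
print("recovered I_R, dtheta:", ir_rec[0, 0], dt_rec[0, 0])
```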

  13. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently several techniques to analyze skin color have been introduced by separating skin color information into chromophore components, such as melanin and hemoglobin. However, there are not many reports on quantitative analysis of unevenness of skin color that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. This method is mainly composed of three techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image, an image pyramid technique which decomposes each chromophore into multi-resolution images that can be used for identifying different sizes of clusters or spatial frequencies, and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. As a result, the method showed high capability to analyze the unevenness of each skin chromophore: 1) Vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne. 2) By analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequency. 3) An image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.

  14. Feedback mechanism for smart nozzles and nebulizers

    DOEpatents

    Montaser, Akbar [Potomac, MD; Jorabchi, Kaveh [Arlington, VA; Kahen, Kaveh [Kleinburg, CA

    2009-01-27

    Nozzles and nebulizers able to produce aerosol with optimum and reproducible quality based on feedback information obtained using laser imaging techniques. Two laser-based imaging techniques, based on particle image velocimetry (PIV) and optical patternation, map and contrast size and velocity distributions for indirect and direct pneumatic nebulizations in plasma spectrometry. Two pulses from a thin laser sheet with a known time difference illuminate the droplet flow field. A charge coupled device (CCD) captures the scattering of laser light from the droplets, providing two instantaneous particle images. Pointwise cross-correlation of corresponding images yields a two-dimensional velocity map of the aerosol velocity field. For droplet size distribution studies, the solution is doped with fluorescent dye and both laser induced fluorescence (LIF) and Mie scattering images are captured simultaneously by two CCDs with the same field of view. The ratio of the LIF/Mie images provides relative droplet size information, which is then scaled by a point calibration method via a phase Doppler particle analyzer.

  15. Novel permutation measures for image encryption algorithms

    NASA Astrophysics Data System (ADS)

    Abd-El-Hafiz, Salwa K.; AbdElHaleem, Sherif H.; Radwan, Ahmed G.

    2016-10-01

    This paper proposes two measures for the evaluation of permutation techniques used in image encryption. First, a general mathematical framework for describing the permutation phase used in image encryption is presented. Using this framework, six different permutation techniques, based on chaotic and non-chaotic generators, are described. The two new measures are, then, introduced to evaluate the effectiveness of permutation techniques. These measures are (1) Percentage of Adjacent Pixels Count (PAPC) and (2) Distance Between Adjacent Pixels (DBAP). The proposed measures are used to evaluate and compare the six permutation techniques in different scenarios. The permutation techniques are applied on several standard images and the resulting scrambled images are analyzed. Moreover, the new measures are used to compare the permutation algorithms on different matrix sizes irrespective of the actual parameters used in each algorithm. The analysis results show that the proposed measures are good indicators of the effectiveness of the permutation technique.

  16. Analysis of the Growth Process of Neural Cells in Culture Environment Using Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid

    Here we present an approach for processing images of neural cells to analyze their growth process in a culture environment. We have applied several image processing techniques for: 1- environmental noise reduction, 2- neural cell segmentation, 3- neural cell classification based on their dendrites' growth conditions, and 4- extraction and measurement of neurons' features (e.g., cell body area, number of dendrites, axon length, and so on). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.

  17. Analyzer-based imaging technique in tomography of cartilage and metal implants: a study at the ESRF

    PubMed Central

    COAN, Paola; MOLLENHAUER, Juergen; WAGNER, Andreas; Muehleman, Carol; BRAVIN, Alberto

    2009-01-01

    Monitoring the progression of osteoarthritis (OA) and the effects of therapy during clinical trials is still a challenge for present clinical imaging techniques, since they have intrinsic limitations and can be sensitive only to advanced OA stages. In very severe cases, partial or complete joint replacement surgery is the only solution for reducing pain and restoring joint function. Poor imaging quality in practically all medical imaging technologies with respect to joint surfaces and to metal implant imaging calls for the development of new techniques that are sensitive to stages preceding the point of irreversible damage of the cartilage tissue. In this scenario, X-ray phase contrast modalities could play an important role since they can provide improved contrast compared to conventional absorption radiography, with a similar or even reduced tissue radiation dose. In this study, analyzer-based imaging (ABI), a technique sensitive to X-ray refraction and permitting high scatter rejection, has been successfully applied in vitro to excised human synovial joints and sheep implants. Pathological and healthy joints as well as metal implants have been imaged in projection and computed tomography ABI mode at high resolution and clinically compatible doses (< 10 mGy). Volume rendering and segmentation permitted visualization of the cartilage from volumetric CT scans. The results demonstrate that ABI can provide an unequivocal non-invasive diagnosis of the state of disease of the joint and can be considered a new tool in orthopaedic research. PMID:18584983

  18. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough transform. The method has many applications, among which is image registration, and can be applied to arbitrary planetary images.

  19. An integrated approach to the use of Landsat TM data for gold exploration in west central Nevada

    NASA Technical Reports Server (NTRS)

    Mouat, D. A.; Myers, J. S.; Miller, N. L.

    1987-01-01

    This paper represents an integration of several Landsat TM image processing techniques with other data to discriminate the lithologies and associated areas of hydrothermal alteration in the vicinity of the Paradise Peak gold mine in west central Nevada. A microprocessor-based image processing system and an IDIMS system were used to analyze data from a 512 X 512 window of a Landsat-5 TM scene collected on June 30, 1984. Image processing techniques included simple band composites, band ratio composites, principal components composites, and baseline-based composites. These techniques were chosen based on their ability to discriminate the spectral characteristics of the products of hydrothermal alteration as well as of the associated regional lithologies. The simple band composite, ratio composite, two principal components composites, and the baseline-based composites separately can define the principal areas of alteration. Combined, they provide a very powerful exploration tool.

  20. n-SIFT: n-dimensional scale invariant feature transform.

    PubMed

    Cheung, Warren; Hamarneh, Ghassan

    2009-09-01

    We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic 3D + time CT data.

  1. Image Processing for Binarization Enhancement via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A. (Inventor)

    2009-01-01

    A technique for enhancing a gray-scale image to improve conversions of the image to binary employs fuzzy reasoning. In the technique, pixels in the image are analyzed by comparing the pixel's gray scale value, which is indicative of its relative brightness, to the values of pixels immediately surrounding the selected pixel. The degree to which each pixel in the image differs in value from the values of surrounding pixels is employed as the variable in a fuzzy reasoning-based analysis that determines an appropriate amount by which the selected pixel's value should be adjusted to reduce vagueness and ambiguity in the image and improve retention of information during binarization of the enhanced gray-scale image.

  2. Through-wall image enhancement using fuzzy and QR decomposition.

    PubMed

    Riaz, Muhammad Mohsin; Ghafoor, Abdul

    2014-01-01

    QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. Fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.

  3. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    NASA Astrophysics Data System (ADS)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is essential to avoid severe outcomes. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed with the results of a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results are promising for the neural classifier in terms of the performance measures.

  4. Moon Technology for Skin Care

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Estee Lauder uses a digital image analyzer and software based on NASA lunar research to evaluate cosmetic products for skin care. Digital image processing brings out subtleties otherwise undetectable and allows better determination of a product's effectiveness. The technique allows Estee Lauder to quantify changes in skin surface form and structure caused by the application of cosmetic preparations.

  5. Application of analyzer based X-ray imaging technique for detection of ultrasound induced cavitation bubbles from a physical therapy unit.

    PubMed

    Izadifar, Zahra; Belev, George; Babyn, Paul; Chapman, Dean

    2015-10-19

    The observation of ultrasound-generated cavitation bubbles deep in tissue is very difficult. The development of an imaging method capable of investigating cavitation bubbles in tissue would improve the efficiency and application of ultrasound in the clinic. Among the imaging modalities previously used to detect cavitation bubbles in vivo, the acoustic detection technique has the advantage of in vivo applicability; however, the size of the initial cavitation bubble and the amplitude of the ultrasound that produced the cavitation bubbles affect the timing and amplitude of the cavitation bubbles' emissions. The spatial distribution of cavitation bubbles, driven by a 0.8835 MHz therapeutic ultrasound system at an output power of 14 W, was studied in water using a synchrotron X-ray imaging technique, Analyzer Based Imaging (ABI). The cavitation bubble distribution was investigated by repeated application of the ultrasound and imaging of the water tank. The spatial frequency of the cavitation bubble pattern was evaluated by Fourier analysis. Acoustic cavitation was imaged at four different locations through the acoustic beam in water at a fixed power level. The pattern of cavitation bubbles in water was detected by synchrotron X-ray ABI, and the spatial distribution of bubbles driven by the therapeutic ultrasound system was observed to be periodic. The calculated spacing revealed that the distance between adjacent cavitation lines is one-half of the acoustic wavelength, consistent with standing waves. This set of experiments demonstrates the utility of synchrotron ABI for visualizing cavitation bubbles formed in water by clinical ultrasound systems working at high frequency and at output powers as low as those of a therapeutic system.
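
    The reported half-wavelength spacing can be sanity-checked with a short Python calculation, assuming a sound speed of roughly 1482 m/s in water (an assumed value, not taken from the record):

      c = 1482.0                      # assumed speed of sound in water, m/s
      f = 0.8835e6                    # driving frequency, Hz
      wavelength = c / f              # ~1.68e-3 m
      line_spacing = wavelength / 2.0 # ~0.84 mm between cavitation lines (standing-wave spacing)
      print(f"wavelength = {wavelength*1e3:.2f} mm, expected spacing = {line_spacing*1e3:.2f} mm")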

  6. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.

  7. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  9. Edge enhancement and noise suppression for infrared image based on feature analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Meng

    2018-06-01

    Infrared images often suffer from background noise, blurred edges, few details and low signal-to-noise ratios. To improve infrared image quality, it is essential to suppress noise and enhance edges simultaneously. To this end, we propose a novel algorithm based on feature analysis in the shearlet domain. Firstly, we introduce the theory and advantages of the shearlet transform as a multi-scale geometric analysis (MGA) tool. Secondly, after analyzing the shortcomings of the traditional thresholding technique for noise suppression, we propose a novel feature extraction that distinguishes image structures from noise well and use it to improve the traditional thresholding technique. Thirdly, by computing the correlations between neighboring shearlet coefficients, feature attribute maps identifying weak details and strong edges are constructed to improve generalized unsharp masking (GUM). Finally, experimental results on infrared images captured in different scenes demonstrate that the proposed algorithm suppresses noise efficiently and enhances image edges adaptively.

  10. Optical image encryption using fresnel zone plate mask based on fast walsh hadamard transform

    NASA Astrophysics Data System (ADS)

    Khurana, Mehak; Singh, Hukum

    2018-05-01

    A new symmetric encryption technique using a Fresnel Zone Plate (FZP) based on the Fast Walsh Hadamard Transform (FWHT) is proposed for security enhancement. In this technique, the bits of the plain image are randomized by shuffling them randomly. The resulting scrambled image is then masked with an FZP using symmetric encryption in the FWHT domain to obtain the final encrypted image. FWHT is used in the cryptosystem to protect the image data from quantization error and to allow perfect reconstruction of the image. The FZP used in the proposed scheme increases the key space and makes it robust to many traditional attacks. The effectiveness and robustness of the proposed cryptosystem have been analyzed on the basis of various parameters by simulation in MATLAB 8.1.0 (R2012b). The experimental results highlight the suitability of the proposed cryptosystem and show that the system is secure.
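
    A heavily simplified Python sketch of the idea, not the authors' exact cryptosystem: pixels are scrambled with a keyed permutation, transformed with a two-dimensional Walsh-Hadamard transform, and masked with a Fresnel-zone-plate-like quadratic phase function. The FZP parameters, key handling and image-size constraints below are illustrative assumptions.

      import numpy as np
      from scipy.linalg import hadamard

      def fzp_mask(n, wavelength=633e-9, focal=0.1, pitch=10e-6):
          # Quadratic phase mask resembling a Fresnel zone plate (assumed parameters).
          y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * pitch
          return np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * focal))

      def encrypt(img, key=1234):
          n = img.shape[0]                              # square image, n a power of two
          perm = np.random.default_rng(key).permutation(n * n)
          scrambled = img.ravel()[perm].reshape(n, n)   # keyed pixel scrambling
          H = hadamard(n) / np.sqrt(n)                  # orthonormal Walsh-Hadamard matrix
          return (H @ scrambled @ H) * fzp_mask(n), perm

      def decrypt(cipher, perm):
          n = cipher.shape[0]
          H = hadamard(n) / np.sqrt(n)
          scrambled = H @ (cipher / fzp_mask(n)) @ H    # unmask, then inverse transform
          flat = np.empty(n * n)
          flat[perm] = scrambled.real.ravel()           # undo the scrambling
          return flat.reshape(n, n)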

  11. The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Cooke, Diane J.; Ketkar, Nikhil; Aksoy, Selim

    2008-01-01

    Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.

  12. Image processing and analysis using neural networks for optometry area

    NASA Astrophysics Data System (ADS)

    Netto, Antonio V.; Ferreira de Oliveira, Maria C.

    2002-11-01

    In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired with the Hartmann-Shack (HS) technique, in order to extract information for formulating a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors, based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of an eye under exam from the same image used to detect refraction errors.

  13. Ballistics projectile image analysis for firearm identification.

    PubMed

    Li, Dongguang

    2006-10-01

    This paper is based upon the observation that, when a bullet is fired, it creates characteristic markings on the cartridge case and projectile. From these markings, over 30 different features can be distinguished, which, in combination, produce a "fingerprint" for a firearm. By analyzing features within such a set of firearm fingerprints, it will be possible to identify not only the type and model of a firearm, but also each and every individual weapon just as effectively as human fingerprint identification. A new analytic system based on the fast Fourier transform for identifying projectile specimens by the line-scan imaging technique is proposed in this paper. This paper develops optical, photonic, and mechanical techniques to map the topography of the surfaces of forensic projectiles for the purpose of identification. Experiments discussed in this paper are performed on images acquired from 16 various weapons. Experimental results show that the proposed system can be used for firearm identification efficiently and precisely through digitizing and analyzing the fired projectile specimens.
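
    A hedged sketch of the core comparison step: the FFT magnitude spectra of two line-scan surface profiles are compared with a normalized correlation score. The synthetic striation profiles and the reading that a score near 1 indicates the same barrel are illustrative assumptions, not the paper's calibrated procedure.

      import numpy as np

      def spectrum(profile):
          p = profile - profile.mean()
          return np.abs(np.fft.rfft(p * np.hanning(len(p))))   # windowed magnitude spectrum

      def spectral_similarity(profile_a, profile_b):
          a, b = spectrum(profile_a), spectrum(profile_b)
          a = (a - a.mean()) / a.std()
          b = (b - b.mean()) / b.std()
          return float(np.dot(a, b) / len(a))                  # ~1 for very similar striation spectra

      # Toy example: two noisy line scans of the "same" striation pattern.
      t = np.linspace(0.0, 1.0, 2048)
      base = np.sin(2 * np.pi * 85 * t) + 0.4 * np.sin(2 * np.pi * 190 * t)
      shot1 = base + 0.1 * np.random.default_rng(1).standard_normal(t.size)
      shot2 = base + 0.1 * np.random.default_rng(2).standard_normal(t.size)
      print(spectral_similarity(shot1, shot2))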

  14. In-vivo Fourier domain optical coherence tomography as a new tool for investigation of vasodynamics in the mouse model.

    PubMed

    Meissner, Sven; Müller, Gregor; Walther, Julia; Morawietz, Henning; Koch, Edmund

    2009-01-01

    In-vivo imaging of the vascular system can provide novel insight into the dynamics of vasoconstriction and vasodilation. Fourier domain optical coherence tomography (FD-OCT) is an optical, noncontact imaging technique based on interferometry of short-coherent near-infrared light with axial resolution of less than 10 μm. In this study, we apply FD-OCT as an in-vivo imaging technique to investigate blood vessels in their anatomical context using temporally resolved image stacks. Our chosen model system is the murine saphenous artery and vein, due to their small inner vessel diameters, sensitive response to vasoactive stimuli, and advantageous anatomical position. The vascular function of male wild-type mice (C57BL/6) is determined at the ages of 6 and 20 weeks. Vasoconstriction is analyzed in response to dermal application of potassium (K+), and vasodilation in response to sodium nitroprusside (SNP). Vasodynamics are quantified from time series (75 sec, 4 frames per sec, 330 x 512 pixels per frame) of cross sectional images that are analyzed by semiautomated image processing software. The morphology of the saphenous artery and vein is determined by 3-D image stacks of 512 x 512 x 512 pixels. Using the FD-OCT technique, we are able to demonstrate age-dependent differences in vascular function and vasodynamics.

  15. Automatic classification of minimally invasive instruments based on endoscopic image sequences

    NASA Astrophysics Data System (ADS)

    Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger

    2009-02-01

    Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.

  16. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  17. Multi-frequency subspace migration for imaging of perfectly conducting, arc-like cracks in full- and limited-view inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2015-02-01

    Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. It is confirmed that this technique is very fast, effective, and robust, and it can be applied not only to full-view but also to limited-view inverse problems if a suitable number of incident fields and corresponding scattered fields are applied and collected. However, in many works, the application of such techniques is heuristic. Motivated by such heuristic applications, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrary-shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of the expression of the imaging functionals reveals certain properties of subspace migration and explains why multiple frequencies enhance imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode. Various results of numerical simulations performed on far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.

  18. Image contrast mechanisms in dynamic friction force microscopy: Antimony particles on graphite

    NASA Astrophysics Data System (ADS)

    Mertens, Felix; Göddenhenrich, Thomas; Dietzel, Dirk; Schirmeisen, Andre

    2017-01-01

    Dynamic Friction Force Microscopy (DFFM) is a technique based on Atomic Force Microscopy (AFM) in which resonance oscillations of the cantilever are excited by lateral actuation of the sample. During this process, the AFM tip in contact with the sample undergoes a complex movement consisting of alternating periods of sticking and sliding. Therefore, DFFM can give access to dynamic transition effects in friction that are not accessible by alternative techniques. Using antimony nanoparticles on graphite as a model system, we analyzed how the combined influences of friction and topography can affect different experimental configurations of DFFM. Based on the experimental results, for example the contrast inversion between fractional resonance and band excitation imaging, strategies to extract reliable tribological information from DFFM images are devised.

  19. A novel edge based embedding in medical images based on unique key generated using sudoku puzzle design.

    PubMed

    Santhi, B; Dheeptha, B

    2016-01-01

    The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images, based on edge-based embedding and XOR coding. The algorithm proposes a novel key generation technique that utilizes the design of a sudoku puzzle to enhance the security of the transmitted message. Only the edge blocks of the cover image are utilized to embed the payloads. The least significant bits of the pixel values are changed by XOR coding depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data are embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Several measures, including peak signal-to-noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R), are used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms the former methodologies.
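
    A minimal Python sketch of edge-guided least-significant-bit embedding with XOR coding; the sudoku-based key generation of the paper is replaced here by a plain pseudorandom key, and a single-channel image with enough edge pixels is assumed for illustration.

      import numpy as np
      from scipy import ndimage

      def edge_mask(gray, thr=60.0):
          gx = ndimage.sobel(gray.astype(float), axis=0)
          gy = ndimage.sobel(gray.astype(float), axis=1)
          return np.hypot(gx, gy) > thr                  # True where the cover image is "edgy"

      def embed(cover, bits, key_seed=7):
          """cover: 2-D uint8 image; bits: sequence of 0/1 message bits."""
          stego = cover.copy()
          idx = np.flatnonzero(edge_mask(cover))[:len(bits)]
          key = np.random.default_rng(key_seed).integers(0, 2, size=len(bits), dtype=np.uint8)
          flat = stego.ravel()
          # XOR coding: the stored LSB is message_bit XOR key_bit.
          flat[idx] = (flat[idx] & 0xFE) | (np.asarray(bits, np.uint8) ^ key)
          return stego

      def extract(stego, cover, n_bits, key_seed=7):
          # The edge map is recomputed on the cover here for simplicity; a practical
          # scheme must use an edge map both sides can reproduce identically.
          idx = np.flatnonzero(edge_mask(cover))[:n_bits]
          key = np.random.default_rng(key_seed).integers(0, 2, size=n_bits, dtype=np.uint8)
          return (stego.ravel()[idx] & 1) ^ key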

  20. A histogram-based technique for rapid vector extraction from PIV photographs

    NASA Technical Reports Server (NTRS)

    Humphreys, William M., Jr.

    1991-01-01

    A new analysis technique, performed totally in the image plane, is proposed which rapidly extracts all available vectors from individual interrogation regions on PIV photographs. The technique avoids the need for using Fourier transforms with the associated computational burden. The data acquisition and analysis procedure is described, and results of a preliminary simulation study to evaluate the accuracy of the technique are presented. Recently obtained PIV photographs are analyzed.

  1. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-04-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been increasing research interest in hybrid imaging techniques that utilize couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, combining electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, namely the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, by relating the power density change to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
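
    A hedged numerical sketch of the linearized step: given a Jacobian J relating a conductivity perturbation to the change in interior power density, one back-projection-style (LBP) image and one Tikhonov-regularized solve look like this. The Jacobian and data below are random placeholders, not a UMEIT forward model.

      import numpy as np

      rng = np.random.default_rng(0)
      n_meas, n_pix = 400, 256                  # measurements, conductivity pixels (assumed sizes)
      J = rng.standard_normal((n_meas, n_pix))  # placeholder sensitivity (Jacobian) matrix
      true_delta_sigma = np.zeros(n_pix)
      true_delta_sigma[100:110] = 1.0           # a small conductivity perturbation
      delta_p = J @ true_delta_sigma + 0.01 * rng.standard_normal(n_meas)

      lbp_image = J.T @ delta_p                 # linear back-projection style estimate
      lam = 0.1 * np.trace(J.T @ J) / n_pix     # crude regularization choice
      tikhonov = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ delta_p)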

  2. High sensitive volumetric imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography

    NASA Astrophysics Data System (ADS)

    Zhi, Zhongwei; Jung, Yeongri; Jia, Yali; An, Lin; Wang, Ruikang K.

    2011-03-01

    We present a non-invasive, label-free imaging technique called Ultrahigh Sensitive Optical Microangiography (UHS-OMAG) for highly sensitive volumetric imaging of renal microcirculation. The UHS-OMAG imaging system is based on spectral domain optical coherence tomography (SD-OCT) and uses a CCD camera with a 47,000 A-line scan rate to achieve an imaging speed of 150 frames per second, so that acquiring a 3D image takes only ~7 seconds. The technique, capable of measuring slow blood flow down to 4 μm/s, is sensitive enough to image capillary networks, such as peritubular capillaries and glomeruli within the renal cortex. We show the superior performance of UHS-OMAG in providing depth-resolved volumetric images of the rich renal microcirculation. We monitored the dynamics of the renal microvasculature during renal ischemia and reperfusion. An obvious reduction of renal microvascular density due to renal ischemia was visualized and quantitatively analyzed. This technique can be helpful for the assessment of chronic kidney disease (CKD), which is related to abnormal microvasculature.

  3. Validation of an image-based technique to assess the perceptual quality of clinical chest radiographs with an observer study

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Choudhury, Kingshuk R.; McAdams, H. Page; Foos, David H.; Samei, Ehsan

    2014-03-01

    We previously proposed a novel image-based quality assessment technique1 to assess the perceptual quality of clinical chest radiographs. In this paper, an observer study was designed and conducted to systematically validate this technique. Ten metrics were involved in the observer study: lung grey level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. For each metric, three tasks were successively presented to the observers. In each task, six ROI images were randomly presented in a row and observers were asked to rank the images based only on a designated quality, disregarding the other qualities. A range slider above the images was used by observers to indicate the acceptable range based on the corresponding perceptual attribute. Five board-certified radiologists from Duke participated in this observer study on a DICOM-calibrated diagnostic display workstation under low ambient lighting conditions. The observer data were analyzed in terms of the correlations between the observer ranking orders and the algorithmic ranking orders. Based on the collected acceptable ranges, quality consistency ranges were statistically derived. The observer study showed that, for each metric, the averaged ranking orders of the participating observers were strongly correlated with the algorithmic orders. For the lung grey level, the observer ranking orders accorded completely with the algorithmic ranking orders. The quality consistency ranges derived from this observer study were close to those derived from our previous study. The observer study indicates that the proposed image-based quality assessment technique provides a robust reflection of the perceptual image quality of clinical chest radiographs. The derived quality consistency ranges can be used to automatically predict the acceptability of a clinical chest radiograph.

  4. Preliminary studies of enhanced contrast radiography in anatomy and embryology of insects with Elettra synchrotron light

    NASA Astrophysics Data System (ADS)

    Hönnicke, M. G.; Foerster, L. A.; Navarro-Silva, M. A.; Menk, R.-H.; Rigon, L.; Cusatis, C.

    2005-08-01

    Enhanced contrast X-ray imaging is achieved by exploiting the real part of the refractive index, which is responsible for phase shifts, in addition to the imaginary part, which is responsible for absorption. Such techniques are called X-ray phase contrast imaging. An analyzer-based X-ray phase contrast imaging set-up with Diffraction Enhanced Imaging (DEI) processing was used for preliminary studies in the anatomy and embryology of insects. Parasitized stinkbug and moth eggs used as control agents of vegetable pests, as well as adult stinkbugs and mosquitoes (Aedes aegypti), were used as samples. The experimental setup was mounted on the SYRMEP beamline at ELETTRA. Images were obtained using a high spatial resolution CCD detector (pixel size 14×14 μm²) coupled with magnifying optics. Analyzer-based X-ray phase contrast images (PCI) and edge detection images show contrast and details not observed with conventional synchrotron radiography and open the possibility of future studies of the embryonic development of insects.

  5. Signal-to-noise ratio analysis and evaluation of the Hadamard imaging technique

    NASA Technical Reports Server (NTRS)

    Jobson, D. J.; Katzberg, S. J.; Spiers, R. B., Jr.

    1977-01-01

    The signal-to-noise ratio performance of the Hadamard imaging technique is analyzed and an experimental evaluation of a laboratory Hadamard imager is presented. A comparison between the performances of Hadamard and conventional imaging techniques shows that the Hadamard technique is superior only when the imaging objective lens is required to have an effective F-number of about 2 or slower.

  6. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems like high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
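
    For orientation, the following scikit-image sketch shows two of the baseline families named above, a global Otsu threshold and a local adaptive threshold; the proposed human-visual-system-based method itself is not reproduced here, and the test image and parameters are placeholders.

      from skimage import data, filters

      gray = data.camera().astype(float)            # stand-in for a stained histology image

      t_otsu = filters.threshold_otsu(gray)         # one global threshold for the whole image
      binary_otsu = gray > t_otsu

      t_local = filters.threshold_local(gray, block_size=51, offset=5)
      binary_local = gray > t_local                 # a different threshold at every pixel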

  7. A half-blind color image hiding and encryption method in fractional Fourier domains

    NASA Astrophysics Data System (ADS)

    Ge, Fan; Chen, Linfei; Zhao, Daomu

    2008-09-01

    We have proposed a new technique for digital image encryption and hiding based on fractional Fourier transforms with double random phases. An original hidden image is encrypted twice and the keys are increased to strengthen information protection. Color image hiding and encryption with wavelength multiplexing is proposed by embedding and encrypting in the R, G and B channels. The robustness against occlusion attacks and noise attacks is analyzed, and computer simulations with the corresponding results are presented.

  8. The use of multisensor images for Earth Science applications

    NASA Technical Reports Server (NTRS)

    Evans, D.; Stromberg, B.

    1983-01-01

    The use of more than one remote sensing technique is particularly important for Earth Science applications because of the compositional and textural information derivable from the images. The ability to simultaneously analyze images acquired by different sensors requires coregistration of the multisensor image data sets. In order to insure pixel to pixel registration in areas of high relief, images must be rectified to eliminate topographic distortions. Coregistered images can be analyzed using a variety of multidimensional techniques and the acquired knowledge of topographic effects in the images can be used in photogeologic interpretations.

  9. Computational polarization difference underwater imaging based on image fusion

    NASA Astrophysics Data System (ADS)

    Han, Hongwei; Zhang, Xiaohui; Guan, Feng

    2016-01-01

    Polarization difference imaging (PDI) can improve the quality of images acquired underwater, whether the background and veiling light are unpolarized or partially polarized. Computational polarization difference imaging, which replaces the mechanical rotation of the polarization analyzer and shortens the time needed to select the optimum orthogonal ∥ and ⊥ axes, is an improvement over conventional PDI, but it originally obtains the output image by manually setting the weight coefficient to a single constant for all pixels. In this paper, an algorithm is proposed that combines the Q and U parameters of the Stokes vector through pixel-level image fusion based on the non-subsampled contourlet transform. An experimental system, built from a green LED array with a polarizer to illuminate a flat target immersed in water and a CCD with a polarization analyzer to acquire target images at different analyzer angles, is used to verify the effect of the proposed algorithm. The results show that the output processed by our algorithm reveals more details of the flat target and has higher contrast than the original computational polarization difference imaging.
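
    A hedged sketch of the computational polarization-difference step: the Stokes parameters Q and U are formed from four analyzer orientations and combined per pixel. The paper fuses Q and U with a non-subsampled contourlet transform; here a simple local-energy weighting stands in for that fusion rule.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def stokes_qu(i0, i45, i90, i135):
          """Intensity images taken at analyzer angles 0, 45, 90 and 135 degrees."""
          return i0 - i90, i45 - i135

      def fuse_qu(q, u, win=7):
          eq = uniform_filter(q**2, win)            # local energy of each component
          eu = uniform_filter(u**2, win)
          w = eq / (eq + eu + 1e-12)                # per-pixel weight (not the NSCT rule)
          return w * q + (1.0 - w) * u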

  10. Image processing based detection of lung cancer on CT scan images

    NASA Astrophysics Data System (ADS)

    Abdillah, Bariqi; Bustamam, Alhadi; Sarwinda, Devvi

    2017-10-01

    In this paper, we implement and analyze image processing methods for the detection of lung cancer. Image processing techniques are widely used in several medical problems for picture enhancement in the detection phase to support early medical treatment. In this research we propose a detection method for lung cancer based on image segmentation, which is an intermediate-level task in image processing. Marker-controlled watershed and region growing approaches are used to segment the CT scan images. The detection phase consists of image enhancement using a Gabor filter, image segmentation, and feature extraction. The experimental results demonstrate the effectiveness of our approach and show that the best approach for detecting the main features is the watershed-with-masking method, which has high accuracy and is robust.
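
    A hedged Python sketch of the pipeline described above, Gabor enhancement followed by marker-controlled watershed; the filter frequency, percentile seeds and the final mask rule are illustrative assumptions rather than the parameters used in the paper.

      import numpy as np
      from skimage import filters, segmentation

      def segment_lung_slice(ct_slice):
          """ct_slice: 2-D float array (a single CT slice scaled to [0, 1])."""
          real, _ = filters.gabor(ct_slice, frequency=0.15)       # enhancement step
          enhanced = np.abs(real)

          # Markers: confident background and confident foreground seeds.
          markers = np.zeros(ct_slice.shape, dtype=np.int32)
          markers[enhanced < np.percentile(enhanced, 20)] = 1     # background seed
          markers[enhanced > np.percentile(enhanced, 95)] = 2     # candidate-lesion seed

          gradient = filters.sobel(enhanced)
          labels = segmentation.watershed(gradient, markers)      # marker-controlled watershed
          return labels == 2                                      # binary candidate mask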

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brun, E., E-mail: emmanuel.brun@esrf.fr; Grandl, S.; Sztrókay-Gaul, A.

    Purpose: Phase contrast computed tomography has emerged as an imaging method which is able to outperform present-day clinical mammography in breast tumor visualization while maintaining an equivalent average dose. To this day, no segmentation technique takes into account the specificity of the phase contrast signal. In this study, the authors propose a new mathematical framework for human-guided breast tumor segmentation. This method has been applied to high-resolution images of excised human organs, each of several gigabytes. Methods: The authors present a segmentation procedure based on the viscous watershed transform and demonstrate the efficacy of this method on analyzer-based phase contrast images. The segmentation of tumors inside two full human breasts is then shown as an example of this procedure's possible applications. Results: A correct and precise identification of the tumor boundaries was obtained and confirmed by manual contouring performed independently by four experienced radiologists. Conclusions: The authors demonstrate that applying the viscous watershed transform allows them to perform the segmentation of tumors in high-resolution x-ray analyzer-based phase contrast breast computed tomography images. Combining the additional information provided by the segmentation procedure with the already high definition of morphological details and tissue boundaries offered by phase contrast imaging techniques will represent a valuable multistep procedure to be used in future medical diagnostic applications.

  12. Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms.

    PubMed

    Adabi, Saba; Hosseinzadeh, Matin; Noei, Shahryar; Conforto, Silvia; Daveluy, Steven; Clayton, Anne; Mehregan, Darius; Nasiriavanaki, Mohammadreza

    2017-12-20

    Currently, diagnosis of skin diseases is based primarily on the visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure, and hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.

  13. TOF-SIMS imaging technique with information entropy

    NASA Astrophysics Data System (ADS)

    Aoyagi, Satoka; Kawashima, Y.; Kudo, Masahiro

    2005-05-01

    Time-of-flight secondary ion mass spectrometry (TOF-SIMS) is, in principle, capable of chemical imaging of proteins on insulating samples. However, selecting the specific peaks related to a particular protein, which are necessary for chemical imaging, out of numerous candidates has been difficult without an appropriate spectrum analysis technique. Therefore, multivariate analysis techniques, such as principal component analysis (PCA), and analysis with mutual information as defined by information theory have been applied to interpret SIMS spectra of protein samples. In this study, mutual information was applied to select protein-related peaks in order to obtain chemical images. Proteins on insulating materials were measured with TOF-SIMS, and the SIMS spectra were then analyzed by means of a comparison based on mutual information. Chemical maps of each protein were obtained using the specific peaks selected for that protein according to their mutual information values. The resulting TOF-SIMS images of proteins on the materials provide useful information on protein adsorption properties, the optimality of immobilization processes and reactions between proteins. Chemical images of proteins by TOF-SIMS thus contribute to understanding interactions between material surfaces and proteins and to developing sophisticated biomaterials.
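
    A hedged sketch of peak selection by mutual information: each candidate peak's per-pixel intensity map is discretized and scored against a reference mask of where the protein is expected, and peaks with high scores are kept for imaging. The discretization, the existence of a reference mask, and the data shapes are assumptions made for illustration.

      import numpy as np

      def mutual_information(peak_map, reference_mask, bins=16):
          x = np.digitize(peak_map.ravel(), np.histogram_bin_edges(peak_map, bins))
          y = reference_mask.ravel().astype(int)
          joint, _, _ = np.histogram2d(x, y, bins=(bins + 2, 2))
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      def rank_peaks(peak_maps, reference_mask):
          """peak_maps: dict mapping an m/z value to its 2-D secondary-ion image."""
          scores = {mz: mutual_information(img, reference_mask) for mz, img in peak_maps.items()}
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)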

  14. [Improvement of magnetic resonance phase unwrapping method based on Goldstein Branch-cut algorithm].

    PubMed

    Guo, Lin; Kang, Lili; Wang, Dandan

    2013-02-01

    The phase information in magnetic resonance (MR) phase images can be used in many MR imaging techniques, but phase wrapping of the images often results in inaccurate phase information, so phase unwrapping is essential for these techniques. In this paper we analyze the causes of errors in phase unwrapping with the commonly used Goldstein branch-cut algorithm and propose an improved algorithm. During the unwrapping process, masking, filtering, a dipole-remover preprocessor, and Prim's minimum spanning tree algorithm were introduced to optimize the residues essential to the Goldstein branch-cut algorithm. Experimental results showed that the residues and branch-cuts were efficiently reduced, a continuous unwrapped phase surface was obtained, and the quality of MR phase images was clearly improved with the proposed method.
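
    A minimal sketch of the residue detection at the heart of the Goldstein branch-cut family of algorithms: the wrapped phase differences are summed around every 2x2 loop, and a nonzero sum marks a positive or negative residue that branch cuts must later connect. This is the textbook residue test, not the authors' full improved pipeline.

      import numpy as np

      def wrap(p):
          return (p + np.pi) % (2.0 * np.pi) - np.pi

      def residues(phase):
          """phase: 2-D wrapped phase image in radians; returns +1/-1/0 per 2x2 loop."""
          d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left -> right
          d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, top -> bottom
          d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right -> left
          d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, bottom -> top
          loop_sum = d1 + d2 + d3 + d4                  # equals 0 or +/- 2*pi
          return np.rint(loop_sum / (2.0 * np.pi)).astype(int)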

  15. Identification and quantitation of semi-crystalline microplastics using image analysis and differential scanning calorimetry.

    PubMed

    Rodríguez Chialanza, Mauricio; Sierra, Ignacio; Pérez Parada, Andrés; Fornaro, Laura

    2018-06-01

    There are several techniques used to analyze microplastics. These are often based on a combination of visual and spectroscopic techniques. Here we introduce an alternative workflow for identification and mass quantitation through a combination of optical microscopy with image analysis (IA) and differential scanning calorimetry (DSC). We studied four synthetic polymers of environmental concern: low and high density polyethylene (LDPE and HDPE, respectively), polypropylene (PP), and polyethylene terephthalate (PET). Selected experiments were conducted to investigate (i) particle characterization and counting procedures based on image analysis with open-source software, (ii) chemical identification of microplastics based on DSC signal processing, (iii) the dependence of the DSC signal on particle size, and (iv) quantitation of microplastic mass based on the DSC signal. We describe the potential and limitations of these techniques to increase reliability for microplastic analysis. Particle size was shown to have a particular influence on the qualitative and quantitative performance of the DSC signals. Both identification (based on characteristic onset temperature) and mass quantitation (based on heat flow) were shown to be affected by particle size. As a result, proper sample treatment, including sieving of suspended particles, is particularly required for this analytical approach.

  16. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to have a precise and correct edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied for the ground processing of MI images for obtaining sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.

  17. Use of kurtosis for locating deep blood vessels in raw speckle imaging using a homogeneity representation.

    PubMed

    Peregrina-Barreto, Hayde; Perez-Corona, Elizabeth; Rangel-Magdaleno, Jose; Ramos-Garcia, Ruben; Chiu, Roger; Ramirez-San-Juan, Julio C

    2017-06-01

    Visualization of deep blood vessels in speckle images is an important task, as it is used to analyze the dynamics of blood flow and the health status of biological tissue. Laser speckle imaging is a wide-field optical technique for measuring relative blood flow speed based on local speckle contrast analysis. However, it has been reported that this technique is limited to blood vessels at depths of about 300 μm because of the high scattering of the sample; beyond this depth, the quality of the vessel's image decreases. The use of a representation based on homogeneity values, computed from the co-occurrence matrix, is proposed as it provides an improved vessel definition and its corresponding diameter. Moreover, a methodology is proposed for automatic blood vessel location based on kurtosis analysis. Results were obtained from different skin phantoms, showing that it is possible to identify the vessel region for different morphologies, even up to 900 μm in depth.
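
    A hedged sketch of the two ingredients described above: a homogeneity map computed from local gray-level co-occurrence matrices, and the kurtosis of its row profiles used to flag where a narrow, peaked structure (a vessel) sits. The window size, gray-level count and row-wise reading are assumptions for illustration.

      import numpy as np
      from scipy.stats import kurtosis
      from skimage.feature import graycomatrix, graycoprops   # named greycomatrix/greycoprops in older scikit-image
      from skimage.util import view_as_windows

      def homogeneity_map(contrast_img, win=15, levels=32):
          edges = np.linspace(contrast_img.min(), contrast_img.max(), levels)
          quantized = (np.digitize(contrast_img, edges) - 1).astype(np.uint8)
          windows = view_as_windows(quantized, (win, win), step=win)
          out = np.zeros(windows.shape[:2])
          for i in range(windows.shape[0]):
              for j in range(windows.shape[1]):
                  glcm = graycomatrix(windows[i, j], distances=[1], angles=[0],
                                      levels=levels, symmetric=True, normed=True)
                  out[i, j] = graycoprops(glcm, 'homogeneity')[0, 0]
          return out

      def row_kurtosis(hom_map):
          # A high-kurtosis row profile suggests a localized vessel crossing that row.
          return kurtosis(hom_map, axis=1)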

  18. An algebraic algorithm for nonuniformity correction in focal-plane arrays.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Hardie, Russell C

    2002-09-01

    A scene-based algorithm is developed to compensate for bias nonuniformity in focal-plane arrays. Nonuniformity can be extremely problematic, especially for mid- to far-infrared imaging systems. The technique is based on use of estimates of interframe subpixel shifts in an image sequence, in conjunction with a linear-interpolation model for the motion, to extract information on the bias nonuniformity algebraically. The performance of the proposed algorithm is analyzed by using real infrared and simulated data. One advantage of this technique is its simplicity; it requires relatively few frames to generate an effective correction matrix, thereby permitting the execution of frequent on-the-fly nonuniformity correction as drift occurs. Additionally, the performance is shown to exhibit considerable robustness with respect to lack of the common types of temporal and spatial irradiance diversity that are typically required by statistical scene-based nonuniformity correction techniques.
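
    A heavily simplified sketch of the algebraic idea: if two frames observe the same scene shifted by exactly one pixel along a row and each detector adds a fixed bias, differencing the shifted observations cancels the scene and leaves bias differences, which can be integrated along the row up to a constant. The real algorithm handles subpixel shifts with a linear-interpolation motion model; the whole-pixel shift and noiseless model here are assumptions.

      import numpy as np

      def estimate_bias_one_pixel_shift(frame1, frame2):
          """Assumes frame2 views frame1's scene shifted one pixel to the right."""
          # Scene cancels: frame2[:, 1:] - frame1[:, :-1] = b[:, 1:] - b[:, :-1]
          dbias = frame2[:, 1:].astype(float) - frame1[:, :-1].astype(float)
          bias = np.zeros(frame1.shape, dtype=float)
          bias[:, 1:] = np.cumsum(dbias, axis=1)        # integrate bias differences along rows
          bias -= bias.mean(axis=1, keepdims=True)      # remove the unknown per-row constant
          return bias

      def correct(frame, bias):
          return frame.astype(float) - bias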

  19. Nanophotonic Image Sensors

    PubMed Central

    Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R. S.

    2016-01-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial‐based THz image sensors, filter‐free nanowire image sensors and nanostructured‐based multispectral image sensors. This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. PMID:27239941

  20. Segmentation Approach Towards Phase-Contrast Microscopic Images of Activated Sludge to Monitor the Wastewater Treatment.

    PubMed

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Lai, Koon Chun

    2017-12-01

    Image processing and analysis is an effective tool for monitoring and fault diagnosis of activated sludge (AS) wastewater treatment plants. AS images comprise flocs (microbial aggregates) and filamentous bacteria. In this paper, nine different approaches are proposed for image segmentation of phase-contrast microscopic (PCM) images of AS samples. The proposed strategies are assessed for their effectiveness from the perspective of the microscopic artifacts associated with PCM. The first approach uses an algorithm based on the idea that color space representations of images other than red-green-blue may have better contrast. The second uses an edge detection approach. The third strategy employs a clustering algorithm for the segmentation, and the fourth applies local adaptive thresholding. The fifth technique is based on texture-based segmentation and the sixth uses the watershed algorithm. The seventh adopts a split-and-merge approach. The eighth employs Kittler's thresholding. Finally, the ninth uses a top-hat and bottom-hat filtering-based technique. The approaches are assessed and analyzed critically with reference to the artifacts of PCM. Gold approximations of ground truth images are prepared to assess the segmentations. Overall, the edge detection-based approach exhibits the best results in terms of accuracy, and the texture-based algorithm in terms of false negative ratio. The respective scenarios are explained for the suitability of the edge detection and texture-based algorithms.

  1. Estimating Number of People Using Calibrated Monocular Camera Based on Geometrical Analysis of Surface Area

    NASA Astrophysics Data System (ADS)

    Arai, Hiroyuki; Miyagawa, Isao; Koike, Hideki; Haseyama, Miki

    We propose a novel technique for estimating the number of people in a video sequence; it has the advantages of being stable even in crowded situations and needing no ground-truth data. By analyzing the geometrical relationships between image pixels and their intersection volumes in the real world quantitatively, a foreground image directly indicates the number of people. Because foreground detection is possible even in crowded situations, the proposed method can be applied in such situations. Moreover, it can estimate the number of people in an a priori manner, so it needs no ground-truth data unlike existing feature-based estimation techniques. Experiments show the validity of the proposed method.

  2. Generative technique for dynamic infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Cao, Zhiguo; Zhang, Tianxu

    2001-09-01

    The generation of dynamic infrared image sequences is discussed in this paper. Because an infrared sensor differs from a CCD camera in its imaging mechanism, it forms an image by receiving the infrared radiation of the scene (including target and background). The infrared imaging sensor is strongly affected by atmospheric radiation, environmental radiation and the attenuation of radiation during atmospheric transfer. Therefore, this paper first analyzes the influence of these radiation sources on imaging and provides the corresponding radiation calculation formulas; in addition, passive and active scenes are analyzed separately, the calculation methods for the passive scene are provided, and the roles of the scene model, the atmospheric transmission model and the material physical attribute databases are explained. Secondly, based on the infrared imaging model, the design concept, implementation approach and software framework of the infrared image sequence simulation software on an SGI workstation are introduced. Guided by these ideas, the third part of the paper presents an example of simulated infrared image sequences that uses sea and sky as the background, a warship as the target and an aircraft as the viewpoint. Finally, the simulation is evaluated comprehensively and an improvement scheme is presented.

  3. The research and development of the adaptive optics in ophthalmology

    NASA Astrophysics Data System (ADS)

    Wu, Chuhan; Zhang, Xiaofang; Chen, Weilin

    2015-08-01

    Recently, the combination of adaptive optics and ophthalmology has made great progress and become highly effective. Retinal diseases are diagnosed with retinal imaging techniques based on scanning optical systems, so scanning the eye requires an optical system with a strong ability to tolerate eye movement and to correct optical aberrations. Adaptive optics offers a high level of adaptability and supports real-time imaging, which meets the requirement of accurate retinal imaging for medical diagnosis. The Scanning Laser Ophthalmoscope and Optical Coherence Tomography are now widely used and are the core techniques in the area of medical retinal imaging. Based on these techniques, researchers from the Institute of Optics and Electronics of CAS (the Chinese Academy of Sciences) in China have designed several adaptive optics systems for medical eye scanning; some foreign research institutions have adopted other methods to eliminate the interference of eye movement and optical aberration; and there are many relevant patents at home and abroad. In this paper, the principles and relevant technical details of the Scanning Laser Ophthalmoscope and Optical Coherence Tomography are described, and the recent development and progress of adaptive optics in the field of retinal imaging are analyzed and summarized.

  4. Planar Laser Imaging of Sprays for Liquid Rocket Studies

    NASA Technical Reports Server (NTRS)

    Lee, W.; Pal, S.; Ryan, H. M.; Strakey, P. A.; Santoro, Robert J.

    1990-01-01

    A planar laser imaging technique which incorporates an optical polarization ratio technique for droplet size measurement was studied. A series of pressure atomized water sprays were studied with this technique and compared with measurements obtained using a Phase Doppler Particle Analyzer. In particular, the effects of assuming a logarithmic normal distribution function for the droplet size distribution within a spray were evaluated. Reasonable agreement between the instruments was obtained for the geometric mean diameter of the droplet distribution. However, comparisons based on the Sauter mean diameter show larger discrepancies, essentially because of uncertainties in the appropriate standard deviation to be applied for the polarization ratio technique. Comparisons were also made between single laser pulse (temporally resolved) measurements and multiple laser pulse visualizations of the spray.

  5. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  6. Near-infrared spectral image analysis of pork marbling based on Gabor filter and wide line detector techniques.

    PubMed

    Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O

    2014-01-01

    Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which is inefficient and costly for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) together with appropriate image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Three image analysis techniques, a Gabor filter, a wide line detector, and spectral averaging, were applied to extract texture, line, and spectral features, respectively, from the NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regression based on derivatives of mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with correlation coefficients of validation of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results reveal the great potential of the Gabor filter for analyzing NIR images of pork for the effective and efficient objective evaluation of pork marbling.
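    A minimal sketch of the Gabor-texture step is given below, assuming scikit-image and scikit-learn: magnitude statistics of a small Gabor filter bank are regressed against marbling scores. The filter frequencies, band images and scores are synthetic placeholders, not the study's calibration data or its selected wavelengths.

      # Minimal sketch: Gabor-filter texture features from single-band NIR images,
      # regressed against marbling scores. All parameters and data are placeholders.
      import numpy as np
      from skimage.filters import gabor
      from sklearn.linear_model import LinearRegression

      def gabor_features(img, frequencies=(0.05, 0.15), n_orient=4):
          feats = []
          for f in frequencies:
              for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
                  real, imag = gabor(img, frequency=f, theta=theta)
                  mag = np.hypot(real, imag)
                  feats += [mag.mean(), mag.std()]      # simple texture statistics
          return np.array(feats)

      rng = np.random.default_rng(0)
      images = [rng.random((64, 64)) for _ in range(20)]   # stand-ins for NIR band images
      scores = rng.uniform(1, 10, size=20)                 # stand-in marbling scores

      X = np.stack([gabor_features(im) for im in images])
      model = LinearRegression().fit(X, scores)
      print("R^2 on the toy data:", model.score(X, scores))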

  7. Whole-body and Whole-Organ Clearing and Imaging Techniques with Single-Cell Resolution: Toward Organism-Level Systems Biology in Mammals.

    PubMed

    Susaki, Etsuo A; Ueda, Hiroki R

    2016-01-21

    Organism-level systems biology aims to identify, analyze, control and design cellular circuits in organisms. Many experimental and computational approaches have been developed over the years to allow us to conduct these studies. Some of the most powerful methods are based on using optical imaging in combination with fluorescent labeling, and for those one of the long-standing stumbling blocks has been tissue opacity. Recently, the solutions to this problem have started to emerge based on whole-body and whole-organ clearing techniques that employ innovative tissue-clearing chemistry. Here, we review these advancements and discuss how combining new clearing techniques with high-performing fluorescent proteins or small molecule tags, rapid volume imaging and efficient image informatics is resulting in comprehensive and quantitative organ-wide, single-cell resolution experimental data. These technologies are starting to yield information on connectivity and dynamics in cellular circuits at unprecedented resolution, and bring us closer to system-level understanding of physiology and diseases of complex mammalian systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Image quality improvement using model-based iterative reconstruction in low dose chest CT for children with necrotizing pneumonia.

    PubMed

    Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun

    2017-03-16

    Model-based iterative reconstruction (MBIR) is a promising reconstruction method that can improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR for noise reduction and image quality improvement in low dose chest CT for children with necrotizing pneumonia, over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low dose CT scans were included. Thinner-slice (0.625 mm) images were retrospectively reconstructed using MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using a Friedman's test. Radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in the detection of more necrotic lesions and their better identification (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9 and 26.3% in MBIR images as compared to FBP and ASIR images. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images. The quality of chest CT images in children with necrotizing pneumonia was significantly improved by the MBIR technique as compared to ASIR and FBP reconstruction, providing a more confident and accurate diagnosis of necrotizing pneumonia.

  9. Accelerometer-Based Method for Extracting Respiratory and Cardiac Gating Information for Dual Gating during Nuclear Medicine Imaging

    PubMed Central

    Pänkäälä, Mikko; Paasio, Ari

    2014-01-01

    Both respiratory and cardiac motions reduce the quality and consistency of medical imaging specifically in nuclear medicine imaging. Motion artifacts can be eliminated by gating the image acquisition based on the respiratory phase and cardiac contractions throughout the medical imaging procedure. Electrocardiography (ECG), 3-axis accelerometer, and respiration belt data were processed and analyzed from ten healthy volunteers. Seismocardiography (SCG) is a noninvasive accelerometer-based method that measures accelerations caused by respiration and myocardial movements. This study was conducted to investigate the feasibility of the accelerometer-based method in dual gating technique. The SCG provides accelerometer-derived respiratory (ADR) data and accurate information about quiescent phases within the cardiac cycle. The correct information about the status of ventricles and atria helps us to create an improved estimate for quiescent phases within a cardiac cycle. The correlation of ADR signals with the reference respiration belt was investigated using Pearson correlation. High linear correlation was observed between accelerometer-based measurement and reference measurement methods (ECG and Respiration belt). Above all, due to the simplicity of the proposed method, the technique has high potential to be applied in dual gating in clinical cardiac positron emission tomography (PET) to obtain motion-free images in the future. PMID:25120563
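    The signal-processing core of the comparison can be sketched briefly: one accelerometer axis is band-pass filtered around typical breathing rates to obtain the accelerometer-derived respiration (ADR) signal, which is then compared with the belt reference using Pearson correlation. The sampling rate, pass band and simulated signals below are assumptions for illustration, not the study's recordings.

      # Sketch: derive a respiration estimate from one accelerometer axis by
      # band-pass filtering, then correlate it with a respiration-belt reference.
      import numpy as np
      from scipy.signal import butter, filtfilt
      from scipy.stats import pearsonr

      fs = 200.0                                   # Hz, assumed sampling rate
      t = np.arange(0, 60, 1 / fs)
      belt = np.sin(2 * np.pi * 0.25 * t)          # simulated 15 breaths/min reference
      accel_z = (0.02 * belt
                 + 0.01 * np.sin(2 * np.pi * 1.2 * t)      # cardiac-band component
                 + 0.005 * np.random.default_rng(1).standard_normal(t.size))

      # Band-pass 0.1-0.5 Hz to isolate the respiratory component (ADR signal)
      b, a = butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
      adr = filtfilt(b, a, accel_z)

      r, p = pearsonr(adr, belt)
      print(f"Pearson r = {r:.3f} (p = {p:.1e})")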

  10. [Development of the automatic dental X-ray film processor].

    PubMed

    Bai, J; Chen, H

    1999-07-01

    This paper introduces a multiple-point technique for detecting the density of dental X-ray films. With this infrared multiple-point detection technique, a single-chip microcomputer control system is used to analyze the effectiveness of film developing in real time in order to achieve a good image. Based on this new technology, we designed an intelligent automatic dental X-ray film processor.

  11. Segmenting overlapping nano-objects in atomic force microscopy image

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Han, Yuexing; Li, Qing; Wang, Bing; Konagaya, Akihiko

    2018-01-01

    Recently, nanoparticle techniques have been developed rapidly for various fields, such as materials science, medicine, and biology. In particular, image processing methods have been widely used to analyze nanoparticles automatically. A technique to automatically segment overlapping nanoparticles with image processing and machine learning is proposed. Here, two tasks are necessary: elimination of image noise and separation of the overlapping shapes. For the first task, mean square error and the seed fill algorithm are adopted to remove noise and improve the quality of the original image. For the second task, four steps are needed to segment the overlapping nanoparticles. First, possible split lines are obtained by connecting high-curvature pixels on the contours. Second, the candidate split lines are classified with a machine learning algorithm. Third, the overlapping regions are detected with the method of density-based spatial clustering of applications with noise (DBSCAN). Finally, the best split lines are selected with a constrained minimum value. We give some experimental examples and compare our technique with two other methods. The results show the effectiveness of the proposed technique.
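    The clustering step can be illustrated with a minimal sketch, assuming scikit-learn: DBSCAN groups candidate points (for example, high-curvature contour pixels) into overlap regions and marks isolated points as noise. The coordinates, eps and min_samples below are illustrative placeholders, not the paper's settings.

      # Sketch of the DBSCAN step: group candidate split-line points into
      # overlap regions; everything else in the pipeline is omitted.
      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(0)
      pts = np.vstack([
          rng.normal((30, 40), 1.5, size=(12, 2)),   # first overlap region
          rng.normal((80, 75), 1.5, size=(10, 2)),   # second overlap region
          rng.uniform(0, 100, size=(5, 2)),          # scattered outliers
      ])

      labels = DBSCAN(eps=5.0, min_samples=4).fit_predict(pts)
      for lab in sorted(set(labels)):
          tag = "noise" if lab == -1 else f"overlap region {lab}"
          print(tag, "->", int(np.sum(labels == lab)), "candidate points")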

  12. Improved accuracy in Wigner-Ville distribution-based sizing of rod-shaped particle using flip and replication technique

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2018-01-01

    A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using flip and replication technique (FRT) is proposed. The FRT extends the length of hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small-sized image sensors.

  13. Spectral images of HD 199178

    NASA Technical Reports Server (NTRS)

    Neff, J. E.; Vilhu, O.; Walter, F. M.

    1988-01-01

    High-resolution IUE spectra of the Mg II k line of HD 199178 were analyzed, applying spectral imaging techniques to derive an image of the chromospheric structure and to study the transient behavior of the chromosphere. All spectra in the IUE archives were uniformly reduced and analyzed. Results are compared with ground-based observations of the photosphere. Four ultraviolet flares on HD 199178 are observed; 3 of these occurred at roughly the same rotational phase. There is no clear phase-dependence of the SWP line fluxes, but there is for the Mg II k flux. The emission centroid of the Mg II k line varies in a quasi-sinusoidal fashion, presumably due to the rotation of a nonuniform chromosphere.

  14. A novel multiphoton microscopy images segmentation method based on superpixel and watershed.

    PubMed

    Wu, Weilin; Lin, Jinyong; Wang, Shu; Li, Yan; Liu, Mingyu; Liu, Gaoqiang; Cai, Jianyong; Chen, Guannan; Chen, Rong

    2017-04-01

    Multiphoton microscopy (MPM), an imaging technique based on two-photon excited fluorescence (TPEF) and second harmonic generation (SHG), shows excellent performance for biological imaging. The automatic segmentation of cellular architectural properties for biomedical diagnosis based on MPM images is still a challenging issue. A novel multiphoton microscopy image segmentation method based on superpixels and watershed (MSW) is presented here to provide good segmentation results for MPM images. The proposed method uses SLIC superpixels instead of pixels to analyze MPM images for the first time. The superpixel segmentation, based on a new distance metric that combines spatial, CIE Lab color space and phase congruency features, divides the images into patches that preserve the details of the cell boundaries. Then the superpixels are used to reconstruct new images by assigning the average value of each superpixel as its pixel intensity level. Finally, marker-controlled watershed is utilized to segment the cell boundaries from the reconstructed images. Experimental results show that cellular boundaries can be extracted from MPM images by MSW with higher accuracy and robustness. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
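    A simplified stand-in for this pipeline is sketched below with scikit-image (SLIC superpixels, mean-intensity reconstruction, marker-controlled watershed). The test image, SLIC parameters, the channel_axis argument (scikit-image >= 0.19) and the percentile-based marker rule are assumptions; the paper's custom distance metric with phase congruency is not reproduced.

      # Simplified sketch of an MSW-style pipeline on a stand-in grayscale image.
      import numpy as np
      from scipy import ndimage as ndi
      from skimage import data, filters, segmentation

      img = data.coins().astype(float) / 255.0          # stand-in for an MPM image

      # 1) SLIC superpixels (compactness chosen for a [0, 1] grayscale image)
      labels = segmentation.slic(img, n_segments=400, compactness=0.1,
                                 channel_axis=None, start_label=1)

      # 2) Reconstruct: replace each superpixel by its mean intensity
      counts = np.bincount(labels.ravel())
      sums = np.bincount(labels.ravel(), weights=img.ravel())
      means = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
      recon = means[labels]

      # 3) Marker-controlled watershed on the gradient of the reconstructed image
      markers = np.zeros_like(recon, dtype=int)
      markers[recon < np.percentile(recon, 20)] = 1     # background seeds
      markers[recon > np.percentile(recon, 80)] = 2     # object seeds
      seg = segmentation.watershed(filters.sobel(recon), ndi.label(markers)[0])
      print("number of segmented regions:", len(np.unique(seg)))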

  15. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers.

    PubMed

    López, Yuri Álvarez; Lorenzo, José Ángel Martínez

    2017-01-15

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.

  16. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    PubMed Central

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel

    2017-01-01

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841
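    The compressed-sensing idea behind the reduced sampling can be illustrated generically; the sketch below is not the article's imaging algorithm. It recovers a sparse 1-D reflectivity profile from far fewer random measurements than unknowns using iterative soft thresholding (ISTA); the problem sizes, sparsity level and regularization weight are illustrative assumptions.

      # Generic compressed-sensing sketch: recover a sparse profile from m << n
      # random measurements with iterative soft thresholding (ISTA).
      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 80, 6                   # signal length, measurements, nonzeros

      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
      y = A @ x_true                                   # reduced measurement set

      L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
      lam = 0.01
      x = np.zeros(n)
      for _ in range(500):                             # ISTA iterations
          g = x + A.T @ (y - A @ x) / L                # gradient step
          x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

      print("relative reconstruction error:",
            np.linalg.norm(x - x_true) / np.linalg.norm(x_true))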

  17. Comic image understanding based on polygon detection

    NASA Astrophysics Data System (ADS)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique for producing digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and their reading order is determined by analyzing the relative geometric relationship between each pair of polygons. The proposed method is tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.

  18. Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve J.

    2007-01-01

    Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safety operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Objects Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and estimate the trajectory and velocity of the foam that caused the accident.

  19. FIM, a Novel FTIR-Based Imaging Method for High Throughput Locomotion Analysis

    PubMed Central

    Otto, Nils; Löpmeier, Tim; Valkov, Dimitar; Jiang, Xiaoyi; Klämbt, Christian

    2013-01-01

    We designed a novel imaging technique based on frustrated total internal reflection (FTIR) to obtain high resolution and high contrast movies. This FTIR-based Imaging Method (FIM) is suitable for a wide range of biological applications and a wide range of organisms. It operates at all wavelengths, permitting the in vivo detection of fluorescent proteins. To demonstrate the benefits of FIM, we analyzed large groups of crawling Drosophila larvae. The number of analyzable locomotion tracks was increased by implementing a new software module capable of preserving larval identity during most collision events. This module is integrated in our new tracking program named FIMTrack, which subsequently extracts a number of features required for the analysis of complex locomotion phenotypes. FIM enables high throughput screening for even subtle behavioral phenotypes. We tested this newly developed setup by analyzing locomotion deficits caused by the glial knockdown of several genes. Suppression of kinesin heavy chain (khc) or rab30 function led to contraction pattern or head sweeping defects, which had escaped detection in previous analyses. Thus, FIM permits forward genetic screens aimed at unraveling the neural basis of behavior. PMID:23349775

  20. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding

    PubMed Central

    BahadarKhan, Khan; A Khaliq, Amir; Shahid, Muhammad

    2016-01-01

    Diabetic Retinopathy (DR) harms the retinal blood vessels in the eye, causing visual impairment. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally inexpensive, unsupervised, automated technique with promising results for detection of the retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide and thin vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with the ground truth data that has been precisely marked by experts. PMID:27441646
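    A simplified stand-in for the enhancement and thresholding stages is sketched below with scikit-image: CLAHE, a two-scale Hessian eigenvalue response (thin and wide structures) and a global Otsu threshold. The synthetic test image, the chosen scales and the use of the smallest eigenvalue as a ridge response are assumptions; the paper's modified formulation and its region-based Otsu variant are not reproduced.

      # Simplified sketch: CLAHE, two-scale Hessian ridge response, Otsu threshold.
      import numpy as np
      from skimage import exposure
      from skimage.feature import hessian_matrix, hessian_matrix_eigvals
      from skimage.filters import threshold_otsu

      # Synthetic stand-in: a dark, curved "vessel" on a bright background
      img = np.ones((200, 200))
      rows = np.arange(200)
      cols = (100 + 40 * np.sin(rows / 25.0)).astype(int)
      for off in (-1, 0, 1):
          img[rows, np.clip(cols + off, 0, 199)] = 0.35

      enhanced = exposure.equalize_adapthist(img, clip_limit=0.03)   # CLAHE
      inverted = 1.0 - enhanced                                      # vessels become bright

      def vessel_response(image, sigma):
          H = hessian_matrix(image, sigma=sigma, order="rc")
          eigs = hessian_matrix_eigvals(H)          # eigenvalues, largest first
          return np.clip(-eigs[1], 0, None)         # strong negative curvature = ridge

      response = np.maximum(vessel_response(inverted, 1.0),   # thin vessels
                            vessel_response(inverted, 3.0))   # wide vessels
      mask = response > threshold_otsu(response)
      print("vessel pixel fraction:", mask.mean())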

  1. Hyperspectral image visualization based on a human visual model

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.

    2008-02-01

    Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet presents challenges for visualization techniques for displaying such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional locations may be analyzed by more advanced algorithms.

  2. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in uniaxial crystal. When linearly polarized images from projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization states of image. Therefore, the optical paths of the image could be changed, and the viewing zone is shifted in a lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  3. Label-free chemical imaging of live Euglena gracilis by high-speed SRS spectral microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wakisaka, Yoshifumi; Suzuki, Yuta; Tokunaga, Kyoya; Hirose, Misa; Domon, Ryota; Akaho, Rina; Kuroshima, Mai; Tsumura, Norimichi; Shimobaba, Tomoyoshi; Iwata, Osamu; Suzuki, Kengo; Nakashima, Ayaka; Goda, Keisuke; Ozeki, Yasuyuki

    2016-03-01

    Microbes, especially microalgae, have recently been of great interest for developing novel biofuels, drugs, and biomaterials. Imaging-based screening of live cells can provide high selectivity and is attractive for efficient bio-production from microalgae. Although conventional cellular screening techniques use cell labeling, labeling of microbes is still under development and can interfere with their cellular functions. Furthermore, since live microbes move and change their shapes rapidly, a high-speed imaging technique is required to suppress motion artifacts. Stimulated Raman scattering (SRS) microscopy allows for label-free and high-speed spectral imaging, which helps us visualize chemical components inside biological cells and tissues. Here we demonstrate high-speed SRS imaging, with temporal resolution of 0.14 seconds, of intracellular distributions of lipid, polysaccharide, and chlorophyll concentrations in rapidly moving Euglena gracilis, a unicellular phytoflagellate. Furthermore, we show that our method allows us to analyze the amount of chemical components inside each living cell. Our results indicate that SRS imaging may be applied to label-free screening of living microbes based on chemical information.

  4. Electromagnetic Vortex-Based Radar Imaging Using a Single Receiving Antenna: Theory and Experimental Results

    PubMed Central

    Yuan, Tiezhu; Wang, Hongqiang; Cheng, Yongqiang; Qin, Yuliang

    2017-01-01

    Radar imaging based on electromagnetic vortex can achieve azimuth resolution without relative motion. The present paper investigates this imaging technique with the use of a single receiving antenna through theoretical analysis and experimental results. Compared with the use of multiple receiving antennas, the echoes from a single receiver cannot be used directly for image reconstruction using Fourier method. The reason is revealed by using the point spread function. An additional phase is compensated for each mode before imaging process based on the array parameters and the elevation of the targets. A proof-of-concept imaging system based on a circular phased array is created, and imaging experiments of corner-reflector targets are performed in an anechoic chamber. The azimuthal image is reconstructed by the use of Fourier transform and spectral estimation methods. The azimuth resolution of the two methods is analyzed and compared through experimental data. The experimental results verify the principle of azimuth resolution and the proposed phase compensation method. PMID:28335487

  5. Identification of Metal Oxide Nanoparticles in Histological Samples by Enhanced Darkfield Microscopy and Hyperspectral Mapping.

    PubMed

    Roth, Gary A; Sosa Peña, Maria del Pilar; Neu-Baker, Nicole M; Tahiliani, Sahil; Brenner, Sara A

    2015-12-08

    Nanomaterials are increasingly prevalent throughout industry, manufacturing, and biomedical research. The need for tools and techniques that aid in the identification, localization, and characterization of nanoscale materials in biological samples is on the rise. Currently available methods, such as electron microscopy, tend to be resource-intensive, making their use prohibitive for much of the research community. Enhanced darkfield microscopy complemented with a hyperspectral imaging system may provide a solution to this bottleneck by enabling rapid and less expensive characterization of nanoparticles in histological samples. This method allows for high-contrast nanoscale imaging as well as nanomaterial identification. For this technique, histological tissue samples are prepared as they would be for light-based microscopy. First, positive control samples are analyzed to generate the reference spectra that will enable the detection of a material of interest in the sample. Negative controls without the material of interest are also analyzed in order to improve specificity (reduce false positives). Samples can then be imaged and analyzed using methods and software for hyperspectral microscopy or matched against these reference spectra in order to provide maps of the location of materials of interest in a sample. The technique is particularly well-suited for materials with highly unique reflectance spectra, such as noble metals, but is also applicable to other materials, such as semi-metallic oxides. This technique provides information that is difficult to acquire from histological samples without the use of electron microscopy techniques, which may provide higher sensitivity and resolution, but are vastly more resource-intensive and time-consuming than light microscopy.
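    The spectral matching step against positive-control reference spectra can be illustrated with a common rule, the spectral angle, as sketched below; this is a generic illustration, not the specific hyperspectral software used in the protocol. The cube, reference spectrum and angle threshold are synthetic placeholders.

      # Illustration: match pixel spectra against a reference spectrum using the
      # spectral angle; pixels with a small angle are flagged as the material.
      import numpy as np

      rng = np.random.default_rng(0)
      bands = 50
      cube = rng.random((32, 32, bands))        # synthetic hyperspectral cube
      reference = rng.random(bands)             # spectrum from a positive control

      def spectral_angle(cube, ref):
          flat = cube.reshape(-1, cube.shape[-1])
          cosang = flat @ ref / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref))
          return np.arccos(np.clip(cosang, -1.0, 1.0)).reshape(cube.shape[:2])

      angle_map = spectral_angle(cube, reference)
      detection = angle_map < 0.10              # radians; threshold tuned on controls
      print("pixels matching the reference spectrum:", int(detection.sum()))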

  6. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well explored topic in the field of image processing. In the past several decades, the progress made in image denoising has benefited from the improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper would serve as a good reference and stimulate new research ideas in image denoising.

  7. Deep learning and non-negative matrix factorization in recognition of mammograms

    NASA Astrophysics Data System (ADS)

    Swiderski, Bartosz; Kurek, Jaroslaw; Osowski, Stanislaw; Kruk, Michal; Barhoumi, Walid

    2017-02-01

    This paper presents a novel approach to the recognition of mammograms. The analyzed mammograms represent normal and breast cancer (benign and malignant) cases. The solution applies the deep learning technique to image recognition. To increase the classification accuracy, non-negative matrix factorization and statistical self-similarity of images are applied. The images reconstructed using these two approaches enrich the database and thereby improve the quality measures of mammogram recognition (accuracy, sensitivity and specificity). The results of numerical experiments performed on the large DDSM database containing more than 10000 mammograms have confirmed good accuracy of class recognition, exceeding the best results reported in current publications for this database.

  8. Design of UAV high resolution image transmission system

    NASA Astrophysics Data System (ADS)

    Gao, Qiang; Ji, Ming; Pang, Lan; Jiang, Wen-tao; Fan, Pengcheng; Zhang, Xingcheng

    2017-02-01

    In order to overcome the bandwidth limitation of the image transmission system on a UAV, a scheme based on image compression technology for mini UAVs is proposed, derived from the requirements of a high-definition UAV image transmission system. The H.264 video coding standard and its key technologies were analyzed and studied for UAV area video communication. Based on research into high-resolution image encoding/decoding and wireless transmission methods, a high-resolution image transmission system was designed on an architecture combining Android and a video codec chip. The constructed system was verified by laboratory experiments: the bit rate could be controlled easily, the QoS was stable, and the low latency meets most application requirements, not only for military use but also for industrial applications.

  9. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. Typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method using the best objective functions. The segmentation results are validated against corresponding ground truth images using binary similarity measures. Statistical, textural and structural features are obtained from the segmented images of normal and DR-affected retinas and are analyzed. CBIR systems in medical image retrieval are used to assist physicians in clinical decision support and in research. A CBIR system is developed using the HSA-based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using HSA-based Otsu MLT and conventional Otsu MLT methods are compared. The precision and recall are found to be 96% and 58%, respectively, for the CBIR system using HSA-based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
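    The retrieval and evaluation step can be sketched in a few lines: database images are ranked by Euclidean distance between feature vectors, and precision and recall are computed at a retrieval cut-off. The feature vectors and relevance labels below are synthetic placeholders, not the study's retinal features.

      # Sketch of Euclidean-distance retrieval with precision/recall at a cut-off.
      import numpy as np

      rng = np.random.default_rng(0)
      db_features = rng.random((200, 16))          # features of segmented DB images
      relevant = np.zeros(200, dtype=bool)
      relevant[:40] = True                         # ground-truth relevance flags
      query = db_features[0] + 0.01 * rng.random(16)   # query resembling image 0

      dists = np.linalg.norm(db_features - query, axis=1)
      top_k = np.argsort(dists)[:10]               # retrieve the 10 closest images

      retrieved_relevant = relevant[top_k].sum()
      precision = retrieved_relevant / len(top_k)
      recall = retrieved_relevant / relevant.sum()
      print(f"precision@10 = {precision:.2f}, recall@10 = {recall:.2f}")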

  10. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm that fully considers image definition evaluation and the focusing search strategy was used to accomplish autofocus. To improve the alignment accuracy, various image-based alignment detection algorithms were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenient operation. The hardware construction, software interface, and image-based algorithms of the main modules are described. Additionally, several image simulation experiments were carried out to analyze the accuracy of the above alignment detection algorithms. Both the simulation and experimental results indicate that the instrument can achieve a polarization alignment accuracy of better than ±0.1 deg.

  11. Thermal wake/vessel detection technique

    DOEpatents

    Roskovensky, John K [Albuquerque, NM; Nandy, Prabal [Albuquerque, NM; Post, Brian N [Albuquerque, NM

    2012-01-10

    A computer-automated method for detecting a vessel in water based on an image of a portion of Earth includes generating a thermal anomaly mask. The thermal anomaly mask flags each pixel of the image initially deemed to be a wake pixel based on a comparison of a thermal value of each pixel against other thermal values of other pixels localized about each pixel. Contiguous pixels flagged by the thermal anomaly mask are grouped into pixel clusters. A shape of each of the pixel clusters is analyzed to determine whether each of the pixel clusters represents a possible vessel detection event. The possible vessel detection events are represented visually within the image.
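    The general idea (not the patent's exact claims) can be sketched as follows: flag pixels whose thermal value deviates from the local neighbourhood mean, group contiguous flagged pixels into clusters, and keep elongated clusters as wake candidates. The window size, anomaly threshold and shape test below are illustrative assumptions.

      # Sketch: thermal anomaly mask, contiguous clustering, simple shape test.
      import numpy as np
      from scipy import ndimage as ndi

      rng = np.random.default_rng(0)
      thermal = rng.normal(290.0, 0.05, (200, 200))      # synthetic sea surface (K)
      thermal[100:103, 40:160] += 0.5                    # synthetic warm wake

      local_mean = ndi.uniform_filter(thermal, size=31)
      anomaly_mask = (thermal - local_mean) > 0.3        # thermal anomaly mask

      clusters, n = ndi.label(anomaly_mask)              # contiguous pixel clusters
      for label, obj_slice in enumerate(ndi.find_objects(clusters), start=1):
          h = obj_slice[0].stop - obj_slice[0].start
          w = obj_slice[1].stop - obj_slice[1].start
          elongation = max(h, w) / max(min(h, w), 1)
          if elongation > 5 and max(h, w) > 30:          # wake-like shape test
              print(f"cluster {label}: bounding box {h}x{w}, elongation {elongation:.1f}")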

  12. Correcting bulk in-plane motion artifacts in MRI using the point spread function.

    PubMed

    Lin, Wei; Wehrli, Felix W; Song, Hee Kwon

    2005-09-01

    A technique is proposed for correcting both translational and rotational motion artifacts in magnetic resonance imaging without the need to collect additional navigator data or to perform intensive postprocessing. The method is based on measuring the point spread function (PSF) by attaching one or two point-sized markers to the main imaging object. Following the isolation of a PSF marker from the acquired image, translational motion could be corrected directly from the modulation transfer function, without the need to determine the object's positions during the scan, although the shifts could be extracted if desired. Rotation is detected by analyzing the relative displacements of two such markers. The technique was evaluated with simulations, phantom and in vivo experiments.
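    Only the final correction step is sketched below: once the in-plane translation has been estimated (for example from the marker's point spread function), it can be removed in k-space via the Fourier shift theorem. Marker isolation and PSF estimation are not reproduced, and the test image and shift are synthetic.

      # Sketch: undo a known bulk translation by applying the Fourier shift
      # theorem to the k-space data.
      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((128, 128))
      dy, dx = 7, -4                                     # estimated shift (pixels)
      moved = np.roll(img, shift=(dy, dx), axis=(0, 1))  # simulate bulk translation

      kspace = np.fft.fft2(moved)
      fy = np.fft.fftfreq(img.shape[0])[:, None]         # cycles per sample
      fx = np.fft.fftfreq(img.shape[1])[None, :]
      corrected = np.fft.ifft2(kspace * np.exp(2j * np.pi * (fy * dy + fx * dx))).real

      print("max residual after correction:", np.abs(corrected - img).max())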

  13. Comparison of ring artifact removal methods using flat panel detector based CT images

    PubMed Central

    2011-01-01

    Background Ring artifacts are the concentric rings superimposed on the tomographic images often caused by the defective and insufficient calibrated detector elements as well as by the damaged scintillator crystals of the flat panel detector. It may be also generated by objects attenuating X-rays very differently in different projection direction. Ring artifact reduction techniques so far reported in the literature can be broadly classified into two groups. One category of the approaches is based on the sinogram processing also known as the pre-processing techniques and the other category of techniques perform processing on the 2-D reconstructed images, recognized as the post-processing techniques in the literature. The strength and weakness of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques basically designed for the multi-slice CT instruments is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram domain ring artifact correction method that classifies the ring artifacts according to their strength and then corrects the artifacts using class adaptive correction schemes is also included in this comparative study. The first sinogram domain correction method uses a wavelet based technique to detect the corrupted pixels and then using a simple linear interpolation technique estimates the responses of the bad pixels. The second sinogram based correction method performs all the filtering operations in the transform domain, i.e., in the wavelet and Fourier domain. On the other hand, the two post-processing based correction techniques actually operate on the polar transform domain of the reconstructed CT images. The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the comparing algorithms have been tested by using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. On the other hand, different types of artifact patterns, e.g., single/band ring, artifacts from defective and mis-calibrated detector elements, rings in highly structural object and also in hard object, rings from different flat-panel detectors are analyzed to perceptually investigate the strength and weakness of the five methods. An investigation has been also carried out to compare the efficacy of these algorithms in correcting the volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in retaining the image information (e.g., small object at the iso-center) accurately in the corrected CT image has been also tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve the diagnostic quality of the corrected slices a combination of the two approaches (sinogram- and post-processing) can be used. 
The compared methods are also not suitable for correcting volume images from a cone beam flat-panel detector based CT. PMID:21846411

  14. Nanophotonic Image Sensors.

    PubMed

    Chen, Qin; Hu, Xin; Wen, Long; Yu, Yan; Cumming, David R S

    2016-09-01

    The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Considerations for opto-mechanical vs. digital stabilization in surveillance systems

    NASA Astrophysics Data System (ADS)

    Kowal, David

    2015-05-01

    Electro-optical surveillance and reconnaissance systems are frequently mounted on unstable or vibrating platforms such as ships, vehicles, aircraft and masts. Mechanical coupling between the platform and the cameras leads to angular vibration of the line of sight. Image motion during detector and eye integration times leads to image smear and a resulting loss of resolution. Additional effects are wavy images for detectors based on a rolling shutter mechanism and annoying movement of the image at low frequencies. A good stabilization system should yield sub-pixel stabilization errors and meet cost and size requirements. There are two main families of LOS stabilization methods: opto-mechanical stabilization and electronic stabilization. Each family, or a combination of both, can be implemented by a number of different techniques of varying complexity, size and cost leading to different levels of stabilization. Opto-mechanical stabilization is typically based on gyro readings, whereas electronic stabilization is typically based on gyro readings or image registration calculations. A few common stabilization techniques, as well as options for different gimbal arrangements will be described and analyzed. The relative merits and drawbacks of the different techniques and their applicability to specific systems and environments will be discussed. Over the years Controp has developed a large number of stabilized electro-optical payloads. A few examples of payloads with unique stabilization mechanisms will be described.

  16. DEMARCATE: Density-based magnetic resonance image clustering for assessing tumor heterogeneity in cancer.

    PubMed

    Saha, Abhijoy; Banerjee, Sayantan; Kurtek, Sebastian; Narang, Shivali; Lee, Joonsang; Rao, Ganesh; Martinez, Juan; Bharath, Karthik; Rao, Arvind U K; Baladandayuthapani, Veerabhadran

    2016-01-01

    Tumor heterogeneity is a crucial area of cancer research wherein inter- and intra-tumor differences are investigated to assess and monitor disease development and progression, especially in cancer. The proliferation of imaging and linked genomic data has enabled us to evaluate tumor heterogeneity on multiple levels. In this work, we examine magnetic resonance imaging (MRI) in patients with brain cancer to assess image-based tumor heterogeneity. Standard approaches to this problem use scalar summary measures (e.g., intensity-based histogram statistics) that do not adequately capture the complete and finer scale information in the voxel-level data. In this paper, we introduce a novel technique, DEMARCATE (DEnsity-based MAgnetic Resonance image Clustering for Assessing Tumor hEterogeneity) to explore the entire tumor heterogeneity density profiles (THDPs) obtained from the full tumor voxel space. THDPs are smoothed representations of the probability density function of the tumor images. We develop tools for analyzing such objects under the Fisher-Rao Riemannian framework that allows us to construct metrics for THDP comparisons across patients, which can be used in conjunction with standard clustering approaches. Our analyses of The Cancer Genome Atlas (TCGA) based Glioblastoma dataset reveal two significant clusters of patients with marked differences in tumor morphology, genomic characteristics and prognostic clinical outcomes. In addition, we see enrichment of image-based clusters with known molecular subtypes of glioblastoma multiforme, which further validates our representation of tumor heterogeneity and subsequent clustering techniques.
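    The density-comparison idea can be illustrated with a short sketch: each tumor heterogeneity density profile is mapped to its square root and compared by arc length on the unit sphere, which is the Fisher-Rao geodesic distance up to a convention-dependent constant. The voxel intensities and kernel density estimates below are synthetic placeholders.

      # Sketch: Fisher-Rao (square-root density) distance between two THDPs.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(0)
      voxels_a = rng.normal(100, 15, 5000)       # voxel intensities, tumor A
      voxels_b = rng.normal(120, 25, 5000)       # voxel intensities, tumor B

      grid = np.linspace(0, 250, 512)
      dx = grid[1] - grid[0]
      p = gaussian_kde(voxels_a)(grid)
      q = gaussian_kde(voxels_b)(grid)
      p /= p.sum() * dx                          # renormalize on the grid
      q /= q.sum() * dx

      bc = np.sum(np.sqrt(p * q)) * dx           # Bhattacharyya coefficient
      fisher_rao = np.arccos(np.clip(bc, -1.0, 1.0))
      print(f"Fisher-Rao distance between the two THDPs: {fisher_rao:.3f}")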

  17. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors can be used to detect various objects on the Earth, from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because of issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques to be analyzed further in the recent development of feature extraction and classification.

  18. Automated texture-based identification of ovarian cancer in confocal microendoscope images

    NASA Astrophysics Data System (ADS)

    Srivastava, Saurabh; Rodriguez, Jeffrey J.; Rouse, Andrew R.; Brewer, Molly A.; Gmitro, Arthur F.

    2005-03-01

    The fluorescence confocal microendoscope provides high-resolution, in-vivo imaging of cellular pathology during optical biopsy. There are indications that the examination of human ovaries with this instrument has diagnostic implications for the early detection of ovarian cancer. The purpose of this study was to develop a computer-aided system to facilitate the identification of ovarian cancer from digital images captured with the confocal microendoscope system. To achieve this goal, we modeled the cellular-level structure present in these images as texture and extracted features based on first-order statistics, spatial gray-level dependence matrices, and spatial-frequency content. Selection of the best features for classification was performed using traditional feature selection techniques including stepwise discriminant analysis, forward sequential search, a non-parametric method, principal component analysis, and a heuristic technique that combines the results of these methods. The best set of features selected was used for classification, and performance of various machine classifiers was compared by analyzing the areas under their receiver operating characteristic curves. The results show that it is possible to automatically identify patients with ovarian cancer based on texture features extracted from confocal microendoscope images and that the machine performance is superior to that of the human observer.
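    A minimal sketch of the texture-feature stage is given below, assuming scikit-image >= 0.19 (graycomatrix/graycoprops): first-order statistics plus a few spatial gray-level dependence matrix properties. The test image is a stand-in, and the feature selection and classifier comparison from the study are not reproduced.

      # Sketch: first-order statistics and GLCM texture properties for one image.
      import numpy as np
      from skimage import data
      from skimage.feature import graycomatrix, graycoprops

      img = data.camera()                               # stand-in grayscale image

      # First-order statistics: mean, standard deviation, skewness
      mean, std = img.mean(), img.std()
      first_order = [mean, std, ((img - mean) ** 3).mean() / std ** 3]

      # GLCM at distance 1 for four orientations
      glcm = graycomatrix(img, distances=[1],
                          angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                          levels=256, symmetric=True, normed=True)
      texture = [graycoprops(glcm, prop).mean()
                 for prop in ("contrast", "correlation", "energy", "homogeneity")]

      features = np.array(first_order + texture)
      print("feature vector:", np.round(features, 3))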

  19. Teaching learning based optimization-functional link artificial neural network filter for mixed noise reduction from magnetic resonance image.

    PubMed

    Kumar, M; Mishra, S K

    2017-01-01

    Clinical magnetic resonance imaging (MRI) images may become corrupted by a mixture of different types of noise, such as Rician, Gaussian and impulse noise. Most of the available filtering algorithms are noise specific, linear, and non-adaptive. There is a need to develop a nonlinear adaptive filter that adapts itself to the requirement and can be applied effectively for suppression of mixed noise from different MRI images. In view of this, a novel nonlinear neural-network-based adaptive filter, the functional link artificial neural network (FLANN), whose weights are trained by a recently developed derivative-free meta-heuristic technique, teaching learning based optimization (TLBO), is proposed and implemented. The performance of the proposed filter is compared with five other adaptive filters and analyzed by considering quantitative metrics and by evaluating a nonparametric statistical test. The convergence curve and computational time are also included to investigate the efficiency of the proposed as well as the competing filters. The simulation outcomes show that the proposed filter outperforms the other adaptive filters. The proposed filter can be hybridized with other evolutionary techniques and utilized for removing different noise and artifacts from other medical images more competently.

  20. Analysis of Vaginal Microbicide Film Hydration Kinetics by Quantitative Imaging Refractometry

    PubMed Central

    Rinehart, Matthew; Grab, Sheila; Rohan, Lisa; Katz, David; Wax, Adam

    2014-01-01

    We have developed a quantitative imaging refractometry technique, based on holographic phase microscopy, as a tool for investigating microscopic structural changes in water-soluble polymeric materials. Here we apply the approach to analyze the structural degradation of vaginal topical microbicide films due to water uptake. We implemented transmission imaging of 1-mm diameter film samples loaded into a flow chamber with a 1.5×2 mm field of view. After water was flooded into the chamber, interference images were captured and analyzed to obtain high resolution maps of the local refractive index and subsequently the volume fraction and mass density of film material at each spatial location. Here, we compare the hydration dynamics of a panel of films with varying thicknesses and polymer compositions, demonstrating that quantitative imaging refractometry can be an effective tool for evaluating and characterizing the performance of candidate microbicide film designs for anti-HIV drug delivery. PMID:24736376

  1. Analysis of vaginal microbicide film hydration kinetics by quantitative imaging refractometry.

    PubMed

    Rinehart, Matthew; Grab, Sheila; Rohan, Lisa; Katz, David; Wax, Adam

    2014-01-01

    We have developed a quantitative imaging refractometry technique, based on holographic phase microscopy, as a tool for investigating microscopic structural changes in water-soluble polymeric materials. Here we apply the approach to analyze the structural degradation of vaginal topical microbicide films due to water uptake. We implemented transmission imaging of 1-mm diameter film samples loaded into a flow chamber with a 1.5×2 mm field of view. After water was flooded into the chamber, interference images were captured and analyzed to obtain high resolution maps of the local refractive index and subsequently the volume fraction and mass density of film material at each spatial location. Here, we compare the hydration dynamics of a panel of films with varying thicknesses and polymer compositions, demonstrating that quantitative imaging refractometry can be an effective tool for evaluating and characterizing the performance of candidate microbicide film designs for anti-HIV drug delivery.
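    The quantitative conversion behind both versions of this record can be sketched with standard quantitative-phase relations: the measured phase map gives an optical path difference, hence a local refractive index, and a linear mixing rule then yields polymer volume fraction and mass concentration. The wavelength, chamber depth and optical constants below are placeholders, not the paper's values.

      # Sketch: phase map -> refractive index -> volume fraction and concentration.
      import numpy as np

      wavelength = 0.633e-6     # m, assumed laser wavelength
      thickness = 50e-6         # m, assumed chamber depth filled by the film
      n_water = 1.333
      n_polymer = 1.50          # assumed index of the fully dense film material
      alpha = 0.18e-3           # m^3/kg, assumed specific refractive increment

      rng = np.random.default_rng(0)
      phase = rng.uniform(0, 6.0, (128, 128))      # stand-in unwrapped phase map (rad)

      opd = phase * wavelength / (2 * np.pi)       # optical path difference (m)
      n_local = n_water + opd / thickness          # local refractive index
      vol_frac = (n_local - n_water) / (n_polymer - n_water)   # polymer volume fraction
      concentration = (n_local - n_water) / alpha  # mass concentration (kg/m^3 = mg/mL)

      print("mean volume fraction:", round(vol_frac.mean(), 4))
      print("mean film concentration (mg/mL):", round(concentration.mean(), 1))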

  2. Alternative radiation-free registration technique for image-guided pedicle screw placement in deformed cervico-thoracic segments.

    PubMed

    Kantelhardt, Sven R; Neulen, Axel; Keric, Naureen; Gutenberg, Angelika; Conrad, Jens; Giese, Alf

    2017-10-01

    Image-guided pedicle screw placement in the cervico-thoracic region is a commonly applied technique. In some patients with deformed cervico-thoracic segments, conventional or 3D fluoroscopy based registration of image-guidance might be difficult or impossible because of the anatomic/pathological conditions. Landmark based registration has been used as an alternative, mostly using separate registration of each vertebra. We here investigated a routine for landmark based registration of rigid spinal segments as single objects, using cranial image-guidance software. Landmark based registration of image-guidance was performed using cranial navigation software. After surgical exposure of the spinous processes, lamina and facet joints and fixation of a reference marker array, up to 26 predefined landmarks were acquired using a pointer. All pedicle screws were implanted using image guidance alone. Following image-guided screw placement all patients underwent postoperative CT scanning. Screw positions as well as intraoperative and clinical parameters were retrospectively analyzed. Thirteen patients received 73 pedicle screws at levels C6 to Th8. Registration of spinal segments, using the cranial image-guidance succeeded in all cases. Pedicle perforations were observed in 11.0%, severe perforations of >2 mm occurred in 5.4%. One patient developed a transient C8 syndrome and had to be revised for deviation of the C7 pedicle screw. No other pedicle screw-related complications were observed. In selected patients suffering from pathologies of the cervico-thoracic region, which impair intraoperative fluoroscopy or 3D C-arm imaging, landmark based registration of image-guidance using cranial software is a feasible, radiation-saving and a safe alternative.
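    Paired-landmark registration of a rigid segment can be illustrated with the standard Kabsch fit sketched below; this is a generic sketch, not the navigation vendor's routine, and the landmark coordinates and noise level are synthetic.

      # Generic paired-landmark rigid registration (Kabsch algorithm).
      import numpy as np

      def rigid_register(moving, fixed):
          """Return rotation R and translation t minimizing ||R @ p + t - q||."""
          pm, qm = moving.mean(axis=0), fixed.mean(axis=0)
          H = (moving - pm).T @ (fixed - qm)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          return R, qm - R @ pm

      rng = np.random.default_rng(0)
      image_pts = rng.uniform(-40, 40, (12, 3))          # landmarks in image space (mm)
      R_true = np.linalg.qr(rng.standard_normal((3, 3)))[0]
      if np.linalg.det(R_true) < 0:
          R_true[:, 0] *= -1                             # make it a proper rotation
      patient_pts = (image_pts @ R_true.T + np.array([10.0, -5.0, 30.0])
                     + rng.normal(0, 0.3, (12, 3)))      # pointer points, 0.3 mm noise

      R, t = rigid_register(patient_pts, image_pts)
      fre = np.linalg.norm(patient_pts @ R.T + t - image_pts, axis=1).mean()
      print(f"mean fiducial registration error: {fre:.2f} mm")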

  3. Error-proofing test system of industrial components based on image processing

    NASA Astrophysics Data System (ADS)

    Huang, Ying; Huang, Tao

    2018-05-01

    As modern industrial standards and accuracy requirements have risen, conventional manual testing no longer satisfies enterprise test standards, so digital image processing techniques should be used to gather and analyze information on the surface of industrial components in order to accomplish the test. To inspect the installation of automotive engine parts, this paper employs a camera to capture images of the components. After the images are preprocessed, including denoising, an image processing algorithm relying on the flood fill algorithm is used to verify the installation of the components. The results show that this system achieves very high test accuracy.
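    A minimal flood-fill sketch is given below: a breadth-first fill grows a connected region from a seed pixel and its area can then be compared against the expected footprint of a correctly installed part. The synthetic image, seed and tolerance are placeholders; a library flood-fill routine could equally be used.

      # Minimal breadth-first flood fill used to measure a connected region.
      from collections import deque
      import numpy as np

      def flood_fill_area(img, seed, tol):
          """Count pixels reachable from `seed` whose value stays within `tol`
          of the seed value (4-connectivity)."""
          h, w = img.shape
          seed_val = float(img[seed])
          visited = np.zeros(img.shape, dtype=bool)
          queue = deque([seed])
          visited[seed] = True
          area = 0
          while queue:
              y, x = queue.popleft()
              area += 1
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                  if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                          and abs(float(img[ny, nx]) - seed_val) <= tol):
                      visited[ny, nx] = True
                      queue.append((ny, nx))
          return area

      part = np.zeros((100, 100), dtype=np.uint8)
      part[30:70, 30:70] = 200                      # stand-in for a mounted component
      area = flood_fill_area(part, seed=(50, 50), tol=10)
      print("filled area (pixels):", area)          # compare against the expected area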

  4. Advanced sensor-simulation capability

    NASA Astrophysics Data System (ADS)

    Cota, Stephen A.; Kalman, Linda S.; Keller, Robert A.

    1990-09-01

    This paper provides an overview of an advanced simulation capability currently in use for analyzing visible and infrared sensor systems. The software system, called VISTAS (VISIBLE/INFRARED SENSOR TRADES, ANALYSES, AND SIMULATIONS) combines classical image processing techniques with detailed sensor models to produce static and time dependent simulations of a variety of sensor systems including imaging, tracking, and point target detection systems. Systems modelled to date include space-based scanning line-array sensors as well as staring 2-dimensional array sensors which can be used for either imaging or point source detection.

  5. Enhanced weak-signal sensitivity in two-photon microscopy by adaptive illumination.

    PubMed

    Chu, Kengyeh K; Lim, Daryl; Mertz, Jerome

    2007-10-01

    We describe a technique to enhance both the weak-signal relative sensitivity and the dynamic range of a laser scanning optical microscope. The technique is based on maintaining a fixed detection power by fast feedback control of the illumination power, thereby transferring high measurement resolution to weak signals while virtually eliminating the possibility of image saturation. We analyze and demonstrate the benefits of adaptive illumination in two-photon fluorescence microscopy.

  6. Hemorrhage Detection and Segmentation in Traumatic Pelvic Injuries

    PubMed Central

    Davuluri, Pavani; Wu, Jie; Tang, Yang; Cockrell, Charles H.; Ward, Kevin R.; Najarian, Kayvan; Hargraves, Rosalyn H.

    2012-01-01

    Automated hemorrhage detection and segmentation in traumatic pelvic injuries is vital for fast and accurate treatment decision making. Hemorrhage is the main cause of death in these patients within the first 24 hours after the injury. It is very time consuming for physicians to analyze all Computed Tomography (CT) images manually. As time is crucial in emergency medicine, analyzing medical images manually delays the decision-making process. Automated hemorrhage detection and segmentation can significantly help physicians to analyze these images and make fast and accurate decisions. Hemorrhage segmentation is a crucial step in the accurate diagnosis and treatment decision-making process. This paper presents a novel rule-based hemorrhage segmentation technique that utilizes pelvic anatomical information to segment hemorrhage accurately. An evaluation measure is used to quantify the accuracy of hemorrhage segmentation. The results show that the proposed method is able to segment hemorrhage very well, and the results are promising. PMID:22919433

  7. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

Results of denoising based on the discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Denoising efficiency is fitted against these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
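
    For context, a "conventional DCT filter" in the hard-thresholding setting can be sketched as block-wise DCT coefficient thresholding; the block size, half-overlap and 2.7*sigma threshold below are common defaults taken here as assumptions, not the exact settings of the study.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_hard_threshold(noisy, sigma, block=8, k=2.7):
    """Block-wise DCT hard-thresholding denoiser (half-overlapping blocks);
    coefficients below k*sigma are zeroed, the DC term is kept."""
    out = np.zeros_like(noisy, dtype=float)
    weight = np.zeros_like(noisy, dtype=float)
    H, W = noisy.shape
    step = block // 2
    for r in range(0, H - block + 1, step):
        for c in range(0, W - block + 1, step):
            patch = noisy[r:r + block, c:c + block].astype(float)
            coef = dctn(patch, norm='ortho')
            dc = coef[0, 0]
            coef[np.abs(coef) < k * sigma] = 0.0   # hard threshold
            coef[0, 0] = dc                        # preserve the mean
            out[r:r + block, c:c + block] += idctn(coef, norm='ortho')
            weight[r:r + block, c:c + block] += 1.0
    return out / np.maximum(weight, 1.0)

# Example on synthetic AWGN with sigma = 10 on an 8-bit scale:
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 128), (128, 1))
noisy = clean + rng.normal(0, 10, clean.shape)
print(np.abs(dct_hard_threshold(noisy, sigma=10) - clean).mean())
```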

  8. Non-invasive imaging of barriers to drug delivery in tumors.

    PubMed

    Hassid, Yaron; Eyal, Erez; Margalit, Raanan; Furman-Haran, Edna; Degani, Hadassa

    2008-08-01

Solid tumors often develop high interstitial fluid pressure (IFP) as a result of increased water leakage and impaired lymphatic drainage, as well as changes in the extracellular matrix composition and elasticity. This high fluid pressure forms a barrier to drug delivery and hence resistance to therapy. We have developed techniques based on contrast-enhanced magnetic resonance imaging for mapping in tumors the vascular and transport parameters that determine the delivery efficiency of blood-borne substances. Sequential images are recorded during continuous infusion of a Gd-based contrast agent and analyzed according to a new physiological model, yielding maps of microvascular transfer constants, as well as outward convective interstitial transfer constants and steady-state interstitial contrast agent concentrations, both reflecting the IFP distribution. We further demonstrated in non-small cell human lung cancer xenografts the capability of our techniques to monitor in vivo a collagenase-induced increase in contrast agent delivery resulting from decreased IFP. These techniques can be applied to test drugs that affect angiogenesis and modulate interstitial fluid pressure, and they have the potential to be extended to cancer patients for assessing resistance to drug delivery.

  9. Non-Invasive Imaging of Barriers to Drug Delivery in Tumors

    PubMed Central

    Hassid, Yaron; Eyal, Erez; Margalit, Raanan; Furman-Haran, Edna; Degani, Hadassa

    2011-01-01

Solid tumors often develop high interstitial fluid pressure (IFP) as a result of increased water leakage and impaired lymphatic drainage, as well as changes in the extracellular matrix composition and elasticity. This high fluid pressure forms a barrier to drug delivery and hence resistance to therapy. We have developed techniques based on contrast-enhanced magnetic resonance imaging for mapping in tumors the vascular and transport parameters that determine the delivery efficiency of blood-borne substances. Sequential images are recorded during continuous infusion of a Gd-based contrast agent and analyzed according to a new physiological model, yielding maps of microvascular transfer constants, as well as outward convective interstitial transfer constants and steady-state interstitial contrast agent concentrations, both reflecting the IFP distribution. We further demonstrated in non-small cell human lung cancer xenografts the capability of our techniques to monitor in vivo a collagenase-induced increase in contrast agent delivery resulting from decreased IFP. These techniques can be applied to test drugs that affect angiogenesis and modulate interstitial fluid pressure, and they have the potential to be extended to cancer patients for assessing resistance to drug delivery. PMID:18638494

  10. Limited-angle tomography for analyzer-based phase-contrast X-ray imaging

    PubMed Central

    Majidi, Keivan; Wernick, Miles N; Li, Jun; Muehleman, Carol; Brankov, Jovan G

    2014-01-01

    Multiple-Image Radiography (MIR) is an analyzer-based phase-contrast X-ray imaging method (ABI), which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about scattering, refraction and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR. Both results show that there is an optimum range of angular view for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume. PMID:24898008

  11. Limited-angle tomography for analyzer-based phase-contrast x-ray imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Wernick, Miles N.; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-07-01

    Multiple-image radiography (MIR) is an analyzer-based phase-contrast x-ray imaging method, which is emerging as a potential alternative to conventional radiography. MIR simultaneously generates three planar parametric images containing information about scattering, refraction and attenuation properties of the object. The MIR planar images are linear tomographic projections of the corresponding object properties, which allows reconstruction of volumetric images using computed tomography (CT) methods. However, when acquiring a full range of linear projections around the tissue of interest is not feasible or the scanning time is limited, limited-angle tomography techniques can be used to reconstruct these volumetric images near the central plane, which is the plane that contains the pivot point of the tomographic movement. In this work, we use computer simulations to explore the applicability of limited-angle tomography to MIR. We also investigate the accuracy of reconstructions as a function of number of tomographic angles for a fixed total radiation exposure. We use this function to find an optimal range of angles over which data should be acquired for limited-angle tomography MIR (LAT-MIR). Next, we apply the LAT-MIR technique to experimentally acquired MIR projections obtained in a cadaveric human thumb study. We compare the reconstructed slices near the central plane to the same slices reconstructed by CT-MIR using the full angular view around the object. Finally, we perform a task-based evaluation of LAT-MIR performance for different numbers of angular views, and use template matching to detect cartilage in the refraction image near the central plane. We use the signal-to-noise ratio of this test as the detectability metric to investigate an optimum range of tomographic angles for detecting soft tissues in LAT-MIR. Both results show that there is an optimum range of angular view for data acquisition where LAT-MIR yields the best performance, comparable to CT-MIR only if one considers volumetric images near the central plane and not the whole volume.
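
    The effect of a restricted angular range can be illustrated with an ordinary filtered-back-projection reconstruction of a phantom; this is plain CT on a single parameter, not the MIR refraction/scatter/attenuation pipeline, and the 60-120 degree window is an arbitrary assumption.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Full-view versus limited-angle reconstruction of the Shepp-Logan phantom.
phantom = rescale(shepp_logan_phantom(), 0.5)
theta_full = np.linspace(0.0, 180.0, 180, endpoint=False)
theta_limited = np.linspace(60.0, 120.0, 60, endpoint=False)  # limited range (assumed)

sino_full = radon(phantom, theta=theta_full)
sino_limited = radon(phantom, theta=theta_limited)

recon_full = iradon(sino_full, theta=theta_full)
recon_limited = iradon(sino_limited, theta=theta_limited)

# Limited-angle artifacts raise the reconstruction error:
print(np.abs(recon_full - phantom).mean(), np.abs(recon_limited - phantom).mean())
```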

  12. Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts: A Case-Based Discussion

    NASA Technical Reports Server (NTRS)

    Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.

    2010-01-01

Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases, including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve the quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image eyes and optic nerves (ON), including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: MRI and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validating findings related to the volume and fluid content of the ON subarachnoid space, the shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.

  13. Self-adaptive relevance feedback based on multilevel image content analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yongying; Zhang, Yujin; Fu, Yu

    2001-01-01

In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi-level image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying can be modified in terms of the different analysis results. Second, to make it more convenient for the user, the relevance feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.

  14. Self-adaptive relevance feedback based on multilevel image content analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yongying; Zhang, Yujin; Fu, Yu

    2000-12-01

In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among the related techniques, relevance feedback has become a hot research topic because it combines information from the user to refine the querying results. In practice, many methods have been proposed to achieve the goal of relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods for relevance feedback, our scheme provides a self-adaptive operation. First, based on multi-level image content analysis, the relevant images from the user can be automatically analyzed at different levels and the querying can be modified in terms of the different analysis results. Second, to make it more convenient for the user, the relevance feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system has been established, and the querying results obtained by our self-adaptive relevance feedback are given.

  15. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Parallel processing approach to transform-based image coding

    NASA Astrophysics Data System (ADS)

    Normile, James O.; Wright, Dan; Chu, Ken; Yeh, Chia L.

    1991-06-01

This paper describes a flexible parallel processing architecture designed for use in real-time video processing. The system consists of floating point DSP processors connected to each other via fast serial links; each processor has access to a globally shared memory. A multiple-bus architecture in combination with a dual-ported memory allows communication with a host control processor. The system has been applied to prototyping of video compression and decompression algorithms. The decomposition of transform-based decompression algorithms into a form suitable for parallel processing is described. A technique for automatic load balancing among the processors is developed and discussed, and results are presented with image statistics and data rates. Finally, techniques for accelerating the system throughput are analyzed, and results from the application of one such modification are described.

  17. Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.

    PubMed

    Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J

    2015-06-01

    Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
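
    Of the algorithms compared above, phase correlation is the simplest to sketch: the translation between two images shows up as a peak in the inverse FFT of the normalized cross-power spectrum. The snippet below is a generic illustration of that idea, not the clinical implementation evaluated in the study.

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift that maps `ref` onto `mov`
    from the peak of the phase-correlation surface."""
    cross = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
    cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
    surface = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(surface), surface.shape), float)
    dims = np.array(surface.shape, float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]  # undo FFT wrap-around
    return peak

# Example: a 7-pixel vertical, 3-pixel horizontal shift is recovered.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
mov = np.roll(ref, shift=(7, 3), axis=(0, 1))
print(phase_correlation_shift(ref, mov))   # approximately [7. 3.]
```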

  18. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.
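
    The watershed ingredient named above can be sketched as a marker-controlled watershed over the image gradient; the marker selection from low-gradient areas below is an assumption, and the generalized Hough step is not shown.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_regions(gray, flat_thresh=0.01):
    """Marker-controlled watershed on the gradient image: connected
    low-gradient (homogeneous) areas serve as markers and are flooded."""
    gradient = sobel(gray)
    markers, _ = ndi.label(gradient < flat_thresh)
    return watershed(gradient, markers)

# Example on a synthetic image with one dark circular "crater":
yy, xx = np.mgrid[0:256, 0:256]
img = 1.0 - 0.6 * (((yy - 128) ** 2 + (xx - 128) ** 2) < 40 ** 2)
labels = watershed_regions(img)
print(len(np.unique(labels)), 'regions')   # crater interior vs. background
```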

  19. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate translation of radiotherapy research ideas to new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory log file based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and the researcher can then interactively modify or add control points. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes. The analyzer gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages versus standalone software are i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades and iv) OS independence. Veritas is written using open-source tools like Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. Disclosure: the author is a full-time employee of Varian Medical Systems, Palo Alto.

  20. Fusion and quality analysis for remote sensing images using contourlet transform

    NASA Astrophysics Data System (ADS)

    Choi, Yoonsuk; Sharifahmadian, Ershad; Latifi, Shahram

    2013-05-01

Recent developments in remote sensing technologies have provided various images with high spatial and spectral resolutions. However, multispectral images have low spatial resolution and panchromatic images have low spectral resolution. Therefore, image fusion techniques are necessary to improve the spatial resolution of spectral images by injecting the spatial details of high-resolution panchromatic images. The objective of image fusion is to provide useful information by improving the spatial resolution and the spectral information of the original images. The fusion results can be utilized in various applications, such as military, medical imaging, and remote sensing applications. This paper addresses two issues in image fusion: i) the image fusion method and ii) quality analysis of the fusion results. First, a new contourlet-based image fusion method is presented, which is an improvement over wavelet-based fusion. This fusion method is then applied to a case study to demonstrate its fusion performance. The fusion framework and scheme used in the study are discussed in detail. Second, quality analysis of the fusion results is discussed. We employed various quality metrics in order to analyze the fusion results both spatially and spectrally. Our results indicate that the proposed contourlet-based fusion method performs better than conventional wavelet-based fusion methods.
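
    As a point of reference for the wavelet-based fusion that the contourlet method is compared against, a common coefficient-level rule (average the approximations, keep the larger-magnitude detail coefficients) can be sketched with PyWavelets; contourlet transforms require a separate library and are not shown, and both inputs are assumed to be co-registered and resampled to the same size.

```python
import numpy as np
import pywt

def wavelet_fuse(pan, ms_band, wavelet='db2', level=3):
    """Wavelet-domain fusion baseline: approximation coefficients are
    averaged, detail coefficients are taken from whichever input has the
    larger magnitude, then the fused band is reconstructed."""
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    c_ms = pywt.wavedec2(ms_band, wavelet, level=level)
    fused = [(c_pan[0] + c_ms[0]) / 2.0]
    for d_pan, d_ms in zip(c_pan[1:], c_ms[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_pan, d_ms)))
    return pywt.waverec2(fused, wavelet)

# Example with synthetic arrays standing in for registered PAN/MS bands:
rng = np.random.default_rng(0)
pan = rng.random((256, 256))
ms_band = rng.random((256, 256))
print(wavelet_fuse(pan, ms_band).shape)
```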

  1. Hyperspectral techniques in analysis of oral dosage forms.

    PubMed

    Hamilton, Sara J; Lowell, Amanda E; Lodder, Robert A

    2002-10-01

    Pharmaceutical oral dosage forms are used in this paper to test the sensitivity and spatial resolution of hyperspectral imaging instruments. The first experiment tested the hypothesis that a near-infrared (IR) tunable diode-based remote sensing system is capable of monitoring degradation of hard gelatin capsules at a relatively long distance (0.5 km). Spectra from the capsules were used to differentiate among capsules exposed to an atmosphere containing 150 ppb formaldehyde for 0, 2, 4, and 8 h. Robust median-based principal component regression with Bayesian inference was employed for outlier detection. The second experiment tested the hypothesis that near-IR imaging spectrometry of tablets permits the identification and composition of multiple individual tablets to be determined simultaneously. A near-IR camera was used to collect thousands of spectra simultaneously from a field of blister-packaged tablets. The number of tablets that a typical near-IR camera can currently analyze simultaneously was estimated to be approximately 1300. The bootstrap error-adjusted single-sample technique chemometric-imaging algorithm was used to draw probability-density contour plots that revealed tablet composition. The single-capsule analysis provides an indication of how far apart the sample and instrumentation can be and still maintain adequate signal-to-noise ratio (S/N), while the multiple-tablet imaging experiment gives an indication of how many samples can be analyzed simultaneously while maintaining an adequate S/N and pixel coverage on each sample.

  2. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad-hoc. Many algorithms like K-means use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our 2D color map interactive segmentation technique based on human color perception information and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational time making it a feasible method for user interactive applications involving segmentation of histological images.

  3. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    PubMed

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose value for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems. Copyright © 2014 Elsevier B.V. All rights reserved.
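
    A uniform-LBP histogram, the classical baseline the scattering features are compared with, can be computed with scikit-image as follows; the patch size and the P, R values are illustrative choices, not the study's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform LBP histogram of a grayscale patch: P neighbours on a circle
    of radius R; the 'uniform' mapping yields P + 2 distinct codes."""
    codes = local_binary_pattern(gray, P, R, method='uniform')
    n_bins = P + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Example on a random patch standing in for an FCFM image region:
rng = np.random.default_rng(0)
patch = rng.integers(0, 256, (128, 128)).astype(float)
print(lbp_histogram(patch).round(3))
```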

  4. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  5. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
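
    A per-segment Zernike-moment descriptor can be sketched with the third-party mahotas package (an assumed stand-in, not the system described in these records); the magnitudes of the moments provide the rotation invariance mentioned above, and the enclosing-radius choice is an assumption.

```python
import numpy as np
import mahotas

def segment_descriptor(binary_segment, degree=8):
    """Zernike-moment magnitudes for one binary hand segment (palm or
    finger); the enclosing radius below is a simple heuristic."""
    seg = binary_segment.astype(np.uint8)
    radius = max(seg.shape) // 2
    return mahotas.features.zernike_moments(seg, radius, degree=degree)

# Example: a synthetic elliptical blob standing in for a finger segment.
yy, xx = np.mgrid[0:128, 0:128]
blob = (((yy - 64) / 50.0) ** 2 + ((xx - 64) / 20.0) ** 2) < 1.0
print(segment_descriptor(blob)[:5])
```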

  6. A novel method for detecting light source for digital images forensic

    NASA Astrophysics Data System (ADS)

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with its own limitations. Very rarely does any method try to capitalize on the way the image was taken by the camera. We propose a new method based on light and its shade, as light and shade are the fundamental input resources that carry the information in an image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation of the digital image. The method is tested on known manipulated images and correctly identifies the light sources. The light source of an image is measured in terms of an angle. The experimental results show the robustness of the methodology.

  7. A multimodal biometric authentication system based on 2D and 3D palmprint features

    NASA Astrophysics Data System (ADS)

    Aggithaya, Vivek K.; Zhang, David; Luo, Nan

    2008-03-01

This paper presents a new personal authentication system that simultaneously exploits 2D and 3D palmprint features. Here, we aim to improve the accuracy and robustness of existing palmprint authentication systems using 3D palmprint features. The proposed system uses an active stereo technique, structured light, to capture a 3D image, or range data, of the palm and a registered intensity image simultaneously. A surface-curvature-based method is employed to extract features from the 3D palmprint, and a Gabor-feature-based competitive coding scheme is used for the 2D representation. We analyze these representations individually and combine them with a score-level fusion technique. Our experiments on a database of 108 subjects achieve a significant improvement in performance (Equal Error Rate) with the integration of 3D features as compared to the case when 2D palmprint features alone are employed.

  8. Histogram analysis for smartphone-based rapid hematocrit determination

    PubMed Central

    Jalal, Uddin M.; Kim, Sang C.; Shim, Joon S.

    2017-01-01

A novel and rapid histogram-based analysis technique has been proposed for the colorimetric quantification of blood hematocrits. A smartphone-based “Histogram” app for the detection of hematocrits has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The developed histogram analysis is effective for automatic detection of the sample channel, including auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the method is advantageous for quantifying blood hematocrit under both uniform and varying optical conditions. The rapid determination of blood hematocrit carries considerable information regarding physiological disorders, and the use of such reproducible, cost-effective, and standard techniques may effectively help with the diagnosis and prevention of a number of human diseases. PMID:28717569

  9. Smart point-of-care systems for molecular diagnostics based on nanotechnology: whole blood glucose analysis

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine P.; Kim, Sanghyo

    2015-07-01

Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system using a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) film chip was developed to carry out the enzyme kinetic reaction at various concentrations of blood glucose. In this technique, the assay reagent was adsorbed onto amine-functionalized silica (AFSiO2) nanoparticles in order to achieve glucose oxidation on the PET film chip. The AFSiO2 nanoparticles immobilize the assay reagent through electrostatic attraction and facilitate an opaque platform that is technically suitable for analysis by the camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module using a photon-detection technique; the photon count decreases with increasing glucose concentration. This simple sensing approach, utilizing an enzyme-immobilized AFSiO2 nanoparticle chip and an assay detection method, was developed for quantitative glucose measurement.

  10. Image-based ELISA on an activated polypropylene microtest plate--a spectrophotometer-free low cost assay technique.

    PubMed

    Parween, Shahila; Nahar, Pradip

    2013-10-15

In this communication, we report an ELISA technique on an activated polypropylene microtest plate (APPµTP) as an illustrative example of a low cost diagnostic assay. The activated test zone in APPµTP binds a capture biomolecule through covalent linkage, thereby eliminating the non-specific binding often prevalent in absorption-based techniques. The efficacy of APPµTP is demonstrated by detecting human immunoglobulin G (IgG), human immunoglobulin E (IgE) and Aspergillus fumigatus antibody in patients' sera. Detection is done by capturing an image of the assay solution with a desktop scanner and analyzing the color of the image. Human IgE quantification by color saturation in the image-based assay shows excellent correlation with the absorbance-based assay (Pearson correlation coefficient, r=0.992). The significance of the relationship is seen from its p value of 4.087e-11. The performance of APPµTP is also checked against microtiter-plate and paper-based ELISA. APPµTP can quantify an analyte as precisely as a microtiter plate with insignificant non-specific binding, a necessary prerequisite for an ELISA assay. In contrast, paper-based ELISA shows high non-specific binding in control sera (false positives). Finally, we carried out the ELISA steps on APPµTP using ultrasound waves in a sonicator bath, and the results show that even in 8 min it can convincingly differentiate a test sample from a control sample. In short, spectrophotometer-free image-based miniaturized ELISA on APPµTP is precise, reliable, rapid, and sensitive and could be a good substitute for conventional immunoassay procedures widely used in clinical and research laboratories. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Temporal Fourier analysis applied to equilibrium radionuclide cineangiography. Importance in the study of global and regional left ventricular wall motion.

    PubMed

    Cardot, J C; Berthout, P; Verdenet, J; Bidet, A; Faivre, R; Bassand, J P; Bidet, R; Maurat, J P

    1982-01-01

    Regional and global left ventricular wall motion was assessed in 120 patients using radionuclide cineangiography (RCA) and contrast angiography. Functional imaging procedures based on a temporal Fourier analysis of dynamic image sequences were applied to the study of cardiac contractility. Two images were constructed by taking the phase and amplitude values of the first harmonic in the Fourier transform for each pixel. These two images aided in determining the perimeter of the left ventricle to calculate the global ejection fraction. Regional left ventricular wall motion was studied by analyzing the phase value and by examining the distribution histogram of these values. The accuracy of global ejection fraction calculation was improved by the Fourier technique. This technique increased the sensitivity of RCA for determining segmental abnormalities especially in the left anterior oblique view (LAO).
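
    The functional images described above, per-pixel amplitude and phase of the first Fourier harmonic over the cardiac cycle, amount to a single temporal FFT; a minimal sketch for a gated frame stack follows.

```python
import numpy as np

def first_harmonic_maps(frames):
    """Per-pixel amplitude and phase of the first temporal harmonic of a
    gated cine sequence; `frames` has shape (n_frames, height, width)."""
    spectrum = np.fft.fft(frames, axis=0)
    first = spectrum[1]                    # first harmonic over the cycle
    return np.abs(first), np.angle(first)  # amplitude map, phase map (radians)

# Example: 16 synthetic frames of a sinusoidally "beating" blob.
t = np.arange(16)
yy, xx = np.mgrid[0:64, 0:64]
blob = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0)
frames = blob[None, :, :] * (1.0 + 0.5 * np.cos(2 * np.pi * t / 16))[:, None, None]
amp, phase = first_harmonic_maps(frames)
print(amp.max().round(3), phase[32, 32].round(3))
```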

  12. Three-dimensional kinematic estimation of mobile-bearing total knee arthroplasty from x-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi

    2011-03-01

To achieve 3D kinematic analysis of total knee arthroplasty (TKA), 2D/3D registration techniques, which use X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant, have attracted attention in recent years. These techniques can provide information regarding the movement of the radiopaque femoral and tibial components but not of the radiolucent polyethylene insert, because the insert silhouette does not appear clearly on X-ray images. Therefore, it has been difficult to obtain the 3D kinematics of the polyethylene insert, particularly a mobile-bearing insert that moves on the tibial component. This study presents a technique for 3D kinematic analysis of the mobile-bearing insert in TKA using X-ray fluoroscopy, evaluates its accuracy, and finally reports clinical applications. To estimate the 3D pose of the mobile-bearing insert using X-ray fluoroscopy, tantalum beads and a CAD model incorporating those beads are utilized, and the 3D pose of the insert model is estimated using a feature-based 2D/3D registration technique. In order to validate the accuracy of the present technique, experiments including a computer simulation test were performed. The results showed that the pose estimation accuracy was sufficient for analyzing mobile-bearing TKA kinematics (RMS error: about 1.0 mm and 1.0 degree). In the clinical applications, seven patients with mobile-bearing TKA performing deep knee bending were studied and analyzed. Consequently, the present technique enables us to better understand mobile-bearing TKA kinematics, and this type of evaluation is expected to be helpful for improving implant design and optimizing TKA surgical techniques.

  13. Hardware implementation of hierarchical volume subdivision-based elastic registration.

    PubMed

    Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj

    2006-01-01

Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved speedups of 100 for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.

  14. Vision based nutrient deficiency classification in maize plants using multi class support vector machines

    NASA Astrophysics Data System (ADS)

    Leena, N.; Saju, K. K.

    2018-04-01

Nutritional deficiencies in plants are a major concern for farmers as they affect productivity and thus profit. This work aims to classify nutritional deficiencies in maize plants in a non-destructive manner using image processing and machine learning techniques. The colored images of the leaves are analyzed and classified with a multi-class support vector machine (SVM) method. Several images of maize leaves with known deficiencies of nitrogen, phosphorous and potassium (NPK) are used to train the SVM classifier prior to the classification of test images. The results show that the method was able to classify and identify nutritional deficiencies.
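
    A minimal multi-class SVM sketch in scikit-learn is shown below; the colour-statistic features and the N/P/K label coding are synthetic placeholders for the leaf data described above, not the study's actual feature set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Placeholder features: colour statistics per leaf image (synthetic stand-ins
# for the real N/P/K-deficient leaf data described above).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.5, (50, 6)) for loc in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 50)          # 0 = N, 1 = P, 2 = K deficiency (assumed coding)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0))  # one-vs-one multi-class
clf.fit(X_tr, y_tr)
print('test accuracy:', clf.score(X_te, y_te))
```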

  15. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point location is devised with the help of grid distribution information, so that point-by-point processing is avoided. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes the domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that our proposed techniques are effective and capable of recovering the hydrological observations accurately.

  16. Methods for analyzing optical observations of tsunami-induced signatures in airglow emissions from ground-based and space-based platforms

    NASA Astrophysics Data System (ADS)

    Grawe, M.; Makela, J. J.

    2016-12-01

Airglow imaging of the 630.0-nm redline emission has emerged as a useful tool for studying the properties of tsunami-ionospheric coupling in recent years, offering spatially continuous coverage of the sky with a single instrument. Past studies have shown that airglow signatures induced by tsunamis are inherently anisotropic due to the observation geometry and effects from the geomagnetic field. Here, we present details of the techniques used to determine the parameters of the signature (orientation, wavelength, etc.), with potential extensions to real or quasi-real time, and a tool for interpreting the location and strength of the signatures in the field of view. We demonstrate application of the techniques to ground-based optical measurements of several tsunami-induced signatures observed over the past five years with an imaging system in Hawaii. Additionally, these methods are extended for use on space-based observation platforms, which offer advantages over ground-based installations.

  17. Coherent X-ray diffraction imaging of nanoengineered polymeric capsules

    NASA Astrophysics Data System (ADS)

    Erokhina, S.; Pastorino, L.; Di Lisa, D.; Kiiamov, A. G.; Faizullina, A. R.; Tayurskii, D. A.; Iannotta, S.; Erokhin, V.

    2017-10-01

For the first time, nanoengineered polymeric capsules and their architecture have been studied with the coherent X-ray diffraction imaging technique. The use of coherent X-ray diffraction imaging allowed us to analyze the samples immersed in a liquid. We report a significant difference between polymeric capsule architectures under dry and liquid conditions.

  18. Coma measurement by transmission image sensor with a PSM

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Wang, Xiangzhao; Ma, Mingying; Zhang, Dongqing; Shi, Weijie; Hu, Jianming

    2005-01-01

As feature sizes decrease, especially with the use of resolution enhancement techniques such as off-axis illumination and phase-shifting masks, fast and accurate in-situ measurement of coma has become very important in improving the performance of modern lithographic tools. The measurement of coma can be achieved with the transmission image sensor, which is an aerial image measurement device. Coma can be determined by measuring the positions of the aerial image at multiple illumination settings. In the present paper, we improve the measurement accuracy of this technique with an alternating phase shifting mask. Using scalar diffraction theory, we analyze the effect of coma on the aerial image. To analyze the effect of the alternating phase shifting mask, we compare the pupil filling of the mark used in the original technique with that of the phase-shifted mark used in the new technique. We calculate the coma-induced image displacements of the marks at multiple partial coherence and NA settings using the PROLITH simulation program. The simulation results show that the accuracy of coma measurement can be improved by approximately 20 percent using the alternating phase shifting mask.

  19. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.

  20. Oriented modulation for watermarking in direct binary search halftone images.

    PubMed

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and realizes the processing efficiency and robustness to be adapted in printing applications.

  1. Sonification of optical coherence tomography data and images

    PubMed Central

    Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.

    2010-01-01

    Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
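
    Parameter-mapped sonification of a single axial scan can be sketched by mapping normalised depth-intensity to pitch and writing out an audio file; the frequency range and tone duration below are arbitrary choices for illustration, not the mapping used by the authors.

```python
import numpy as np
from scipy.io import wavfile

def sonify_ascan(ascan, fs=44100, tone_dur=0.02, f_lo=200.0, f_hi=2000.0):
    """Map each depth sample of an OCT A-scan to a short tone whose pitch
    follows the normalised backscattered intensity, then write a WAV file."""
    a = (ascan - ascan.min()) / (np.ptp(ascan) + 1e-12)
    t = np.linspace(0.0, tone_dur, int(fs * tone_dur), endpoint=False)
    tones = [0.3 * np.sin(2 * np.pi * (f_lo + v * (f_hi - f_lo)) * t) for v in a]
    audio = np.concatenate(tones)
    wavfile.write('ascan_sonified.wav', fs, (audio * 32767).astype(np.int16))

# Example: a synthetic A-scan with two reflective layers.
depth = np.zeros(256)
depth[60], depth[180] = 1.0, 0.6
sonify_ascan(depth)
```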

  2. Predicting neuropathic ulceration: analysis of static temperature distributions in thermal images

    NASA Astrophysics Data System (ADS)

    Kaabouch, Naima; Hu, Wen-Chen; Chen, Yi; Anderson, Julie W.; Ames, Forrest; Paulson, Rolf

    2010-11-01

    Foot ulcers affect millions of Americans annually. Conventional methods used to assess skin integrity, including inspection and palpation, may be valuable approaches, but they usually do not detect changes in skin integrity until an ulcer has already developed. We analyze the feasibility of thermal imaging as a technique to assess the integrity of the skin and its many layers. Thermal images are analyzed using an asymmetry analysis, combined with a genetic algorithm, to examine the infrared images for early detection of foot ulcers. Preliminary results show that the proposed technique can reliably and efficiently detect inflammation and hence effectively predict potential ulceration.
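
    The asymmetry analysis can be illustrated by reflecting the plantar thermal image about a pre-detected symmetry axis and mapping the absolute temperature difference; the axis detection and the genetic-algorithm optimisation mentioned above are not shown, and the axis column here is assumed known.

```python
import numpy as np

def asymmetry_map(thermal, axis_col):
    """Absolute left-right temperature difference about a vertical symmetry
    axis at column `axis_col`; large values flag candidate inflammation."""
    half = min(axis_col, thermal.shape[1] - axis_col)
    left = thermal[:, axis_col - half:axis_col]
    right = thermal[:, axis_col:axis_col + half][:, ::-1]
    return np.abs(left - right)

# Example: a synthetic foot-pair image with a warm spot on one side only.
img = 30.0 + np.zeros((64, 80))
img[40:48, 10:18] += 2.5                      # localized 2.5 degC hot spot
print(asymmetry_map(img, axis_col=40).max())  # 2.5
```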

  3. Effects of illumination differences on photometric stereo shape-and-albedo-from-shading for precision lunar surface reconstruction

    NASA Astrophysics Data System (ADS)

    Chung Liu, Wai; Wu, Bo; Wöhler, Christian

    2018-02-01

    Photoclinometric surface reconstruction techniques such as Shape-from-Shading (SfS) and Shape-and-Albedo-from-Shading (SAfS) retrieve topographic information of a surface on the basis of the reflectance information embedded in the image intensity of each pixel. SfS or SAfS techniques have been utilized to generate pixel-resolution digital elevation models (DEMs) of the Moon and other planetary bodies. Photometric stereo SAfS analyzes images under multiple illumination conditions to improve the robustness of reconstruction. In this case, the directional difference in illumination between the images is likely to affect the quality of the reconstruction result. In this study, we quantitatively investigate the effects of illumination differences on photometric stereo SAfS. Firstly, an algorithm for photometric stereo SAfS is developed, and then, an error model is derived to analyze the relationships between the azimuthal and zenith angles of illumination of the images and the reconstruction qualities. The developed algorithm and error model were verified with high-resolution images collected by the Narrow Angle Camera (NAC) of the Lunar Reconnaissance Orbiter Camera (LROC). Experimental analyses reveal that (1) the resulting error in photometric stereo SAfS depends on both the azimuthal and the zenith angles of illumination as well as the general intensity of the images and (2) the predictions from the proposed error model are consistent with the actual slope errors obtained by photometric stereo SAfS using the LROC NAC images. The proposed error model enriches the theory of photometric stereo SAfS and is of significance for optimized lunar surface reconstruction based on SAfS techniques.
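
    The underlying photometric-stereo step, without the SAfS regularisation or the proposed error model, is a per-pixel least-squares solve for albedo-scaled normals from images under known illumination directions; a Lambertian sketch follows with synthetic data standing in for LROC NAC imagery.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo: solve L g = I per pixel, where
    g = albedo * normal.  images: (K, H, W); light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Example: a synthetic sphere rendered under three illumination directions.
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
n = np.stack([xx, yy, zz]) / np.maximum(np.sqrt(xx**2 + yy**2 + zz**2), 1e-12)
L = np.array([[0, 0, 1], [0.5, 0, 0.866], [0, 0.5, 0.866]])
imgs = np.clip(np.tensordot(L, n, axes=([1], [0])), 0, None)
normals, albedo = photometric_stereo(imgs, L)
print(normals[:, 64, 64].round(2))   # approximately [0, 0, 1] at the pole
```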

  4. Analysis of neoplastic lesions in magnetic resonance imaging using self-organizing maps.

    PubMed

    Mei, Paulo Afonso; de Carvalho Carneiro, Cleyton; Fraser, Stephen J; Min, Li Li; Reis, Fabiano

    2015-12-15

To provide an improved method for the identification and analysis of brain tumors in MRI scans using a semi-automated computational approach that has the potential to provide a more objective, precise and quantitatively rigorous analysis compared to human visual analysis. Self-Organizing Maps (SOM) is an unsupervised, exploratory data analysis tool which can automatically partition an image into self-similar regions or clusters based on measures of similarity. It can be used to perform such partitioning of brain tissue on MR images without prior knowledge. We used SOM to analyze T1, T2 and FLAIR acquisitions from two MRI machines in our service from 14 patients with brain tumors confirmed by biopsies: three lymphomas, six glioblastomas, one meningioma, one ganglioglioma, two oligoastrocytomas and one astrocytoma. The SOM software was used to analyze the data from the three image acquisitions from each patient and generated a self-organized map for each, containing 25 clusters. Damaged tissue was separated from normal tissue using the SOM technique. Furthermore, in some cases it allowed separation of different areas within the tumor, such as edema/peritumoral infiltration and necrosis. In lesions with less precise boundaries in FLAIR, the estimated damaged tissue area in the resulting map appears larger. Our results show that SOM has the potential to be a powerful MR imaging analysis technique for the assessment of brain tumors. Copyright © 2015. Published by Elsevier B.V.
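
    Voxel-wise clustering with a self-organising map can be sketched with the third-party MiniSom package, an assumed substitute for the SOM software used in the study; a 5x5 grid reproduces the 25 clusters mentioned, and the synthetic volumes below stand in for co-registered scans.

```python
import numpy as np
from minisom import MiniSom   # third-party SOM implementation (assumption)

def som_cluster_voxels(t1, t2, flair, grid=5, n_iter=10000):
    """Cluster voxels by their (T1, T2, FLAIR) signature on a grid x grid
    SOM and return a label volume with grid*grid clusters."""
    data = np.stack([t1.ravel(), t2.ravel(), flair.ravel()], axis=1).astype(float)
    data = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-12)
    som = MiniSom(grid, grid, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(data, n_iter)
    labels = np.array([np.ravel_multi_index(som.winner(v), (grid, grid)) for v in data])
    return labels.reshape(t1.shape)

# Example with small synthetic volumes standing in for co-registered scans:
rng = np.random.default_rng(0)
t1, t2, flair = (rng.random((16, 64, 64)) for _ in range(3))
print(np.unique(som_cluster_voxels(t1, t2, flair)).size, 'clusters')
```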

  5. An intelligent framework for medical image retrieval using MDCT and multi SVM.

    PubMed

    Balan, J A Alex Rajju; Rajan, S Edward

    2014-01-01

Volumes of medical images are rapidly generated in the medical field, and managing them effectively has become a great challenge. This paper studies the development of an innovative medical image retrieval approach based on texture features and its accuracy. The objective of the paper is to analyze image retrieval for diagnosis in healthcare management systems. The paper traces the development of innovative medical image retrieval to estimate both the image texture features and the retrieval accuracy. The texture features of medical images are extracted using the MDCT and a multi-class SVM. Both the theoretical approach and the simulation results revealed interesting observations, and they were corroborated using MDCT coefficients and the SVM methodology. All attempts to extract data about the image in response to the query were computed successfully, and perfect image retrieval performance was obtained. Experimental results on a database of 100 trademark medical images show that an integrated texture feature representation results in 98% of the images being retrieved using the MDCT and multi-class SVM. Thus we have studied an SVM-based multi-classification technique that is well suited to medical images. The results show retrieval accuracies of 98% and 99% for different sets of medical images with respect to the image class.

[New methods for the evaluation of bone quality. Assessment of bone structural property using imaging].

    PubMed

    Ito, Masako

The structural properties of bone include the micro- and nano-structural properties of trabecular and cortical bone, as well as macroscopic geometry. Radiological techniques are useful for analyzing bone structural properties; multi-detector row CT (MDCT) or high-resolution peripheral QCT (HR-pQCT) is available to analyze human bone in vivo. For the analysis of hip geometry, CT-based hip structure analysis (HSA) is available as well as DXA-based HSA. These structural parameters are related to biomechanical properties, and these assessment tools provide information on pathological changes or the effects of anti-osteoporotic agents on bone.

  7. A novel content-based active contour model for brain tumor segmentation.

    PubMed

    Sachdeva, Jainy; Kumar, Vinod; Gupta, Indra; Khandelwal, Niranjan; Ahuja, Chirag Kamal

    2012-06-01

    Brain tumor segmentation is a crucial step in surgical and treatment planning. Intensity-based active contour models such as gradient vector flow (GVF), magnetostatic active contour (MAC) and fluid vector flow (FVF) have been proposed to segment homogeneous objects/tumors in medical images. In this study, extensive experiments are done to analyze the performance of intensity-based techniques for homogeneous tumors on brain magnetic resonance (MR) images. The analysis shows that the state-of-the-art methods fail to segment homogeneous tumors against a similar background or when these tumors show partial diversity toward the background. They also suffer from premature convergence in the presence of false edges/saddle points. Moreover, the presence of weak and diffused edges (due to edema around the tumor) leads to oversegmentation by intensity-based techniques. Therefore, the proposed content-based active contour (CBAC) method uses both intensity and texture information present within the active contour to overcome the above-stated problems while capturing a large intensity range in an image. It also proposes a novel use of the Gray-Level Co-occurrence Matrix (GLCM) to define a texture space for tumor segmentation. The effectiveness of this method is tested on two different real data sets (55 patients - more than 600 images) containing five different types of homogeneous, heterogeneous and diffused tumors and synthetic images (non-MR benchmark images). Remarkable results are obtained in segmenting homogeneous tumors of uniform intensity, heterogeneous tumors of complex content, and diffused tumors on MR images (T1-weighted, postcontrast T1-weighted and T2-weighted) and synthetic images (non-MR benchmark images of varying intensity, texture, noise content and false edges). Further, tumor volume is efficiently extracted from 2-dimensional slices; this is termed 2.5-dimensional segmentation. Copyright © 2012 Elsevier Inc. All rights reserved.
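    A hedged sketch of the kind of GLCM texture descriptors CBAC relies on, computed here with scikit-image; the exact properties, quantization and window sizes used in the paper are not reproduced (the function is named greycomatrix in older scikit-image releases).

    ```python
    # Sketch: GLCM texture descriptors of an MR patch via scikit-image.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(patch, levels=32):
        """Contrast/homogeneity/energy of a patch quantized to `levels` gray levels."""
        q = (patch.astype(float) / (patch.max() + 1e-9) * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {prop: graycoprops(glcm, prop).mean()
                for prop in ('contrast', 'homogeneity', 'energy')}
    ```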

  8. Quantitative elemental imaging of heterogeneous catalysts using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Trichard, F.; Sorbier, L.; Moncayo, S.; Blouët, Y.; Lienemann, C.-P.; Motto-Ros, V.

    2017-07-01

    Currently, the use of catalysis is widespread in almost all industrial processes; its use improves productivity, synthesis yields and waste treatment as well as decreases energy costs. The increasingly stringent requirements, in terms of reaction selectivity and environmental standards, impose progressively greater accuracy and control of operations. Meanwhile, the development of characterization techniques has been challenging, and the techniques often require equipment with high complexity. In this paper, we demonstrate a novel elemental approach for performing quantitative space-resolved analysis with ppm-scale quantification limits and μm-scale resolution. This approach, based on laser-induced breakdown spectroscopy (LIBS), is distinguished by its simplicity, all-optical design, and speed of operation. This work analyzes palladium-based porous alumina catalysts, which are commonly used in the selective hydrogenation process, using the LIBS method. We report an exhaustive study of the quantification capability of LIBS and its ability to perform imaging measurements over a large dynamic range, typically from a few ppm to wt%. These results offer new insight into the use of LIBS-based imaging in industry and pave the way for numerous applications.

  9. Otitis Media Diagnosis for Developing Countries Using Tympanic Membrane Image-Analysis.

    PubMed

    Myburgh, Hermanus C; van Zijl, Willemien H; Swanepoel, DeWet; Hellström, Sten; Laurent, Claude

    2016-03-01

    Otitis media is one of the most common childhood diseases worldwide, but because of a lack of doctors and health personnel in developing countries it is often misdiagnosed or not diagnosed at all. This may lead to serious and life-threatening complications. There is thus a need for an automated, computer-based image-analysis system that could assist in making accurate otitis media diagnoses anywhere. A method for automated diagnosis of otitis media is proposed. The method uses image-processing techniques to classify otitis media. The system is trained using high quality pre-assessed images of tympanic membranes, captured by digital video-otoscopes, and classifies undiagnosed images into five otitis media categories based on predefined signs. Several verification tests analyzed the classification capability of the method. An accuracy of 80.6% was achieved for images taken with commercial video-otoscopes, while an accuracy of 78.7% was achieved for images captured on-site with a low cost custom-made video-otoscope. The high accuracy of the proposed otitis media classification system compares well with the classification accuracy of general practitioners and pediatricians (~64% to 80%) using traditional otoscopes, and therefore holds promise for the future in making automated diagnosis of otitis media in medically underserved populations.

  11. Experimental research of digital holographic microscopic measuring

    NASA Astrophysics Data System (ADS)

    Zhu, Xueliang; Chen, Feifei; Li, Jicheng

    2013-06-01

    Digital holography is a new imaging technique developed on the basis of optical holography, digital processing, and computer techniques. It uses a CCD instead of conventional silver-halide film to record the hologram, and then reproduces the 3D contour of the object by computer simulation. Compared with traditional optical holography, the whole process offers simpler measurement, lower cost, faster imaging, and the advantage of non-contact real-time measurement. At present, it can be used for the morphology detection of tiny objects, micro-deformation analysis, and the shape measurement of biological cells, and it is an active research topic worldwide. This paper introduces the basic principles and relevant theories of optical and digital holography, and investigates the basic factors that influence the reconstructed images in the recording and reconstruction processes of digital holographic microscopy. To obtain a clear digital hologram, we analyzed the optical system structure and discussed the recording distance of the hologram. On the basis of the theoretical studies, we established a measurement setup, analyzed the experimental conditions, and adjusted the system accordingly. To achieve precise three-dimensional measurement of tiny objects, we measured a MEMS micro-device as an example, obtained the reconstructed three-dimensional contour, and realized three-dimensional profile measurement of a tiny object. Based on the experimental results, the measurement errors were analyzed, considering the influence of the zero-order term and the twin images, the choice of object and reference beams, the recording and reconstruction distances, and the characteristics of the reconstruction light. The research results show that the device has a reasonable degree of reliability.

  12. Visibility enhancement of color images using Type-II fuzzy membership function

    NASA Astrophysics Data System (ADS)

    Singh, Harmandeep; Khehra, Baljit Singh

    2018-04-01

    Images taken in poor environmental conditions suffer from reduced visibility and obscured detail. Therefore, image enhancement techniques are necessary for improving the significant details of these images. An extensive review has shown that histogram-based enhancement techniques greatly suffer from over-/under-enhancement issues, while fuzzy-based enhancement techniques suffer from over-/under-saturated pixel problems. In this paper, a novel Type-II fuzzy-based image enhancement technique has been proposed for improving the visibility of images. The Type-II fuzzy logic can automatically extract the local atmospheric light and roughly eliminate the atmospheric veil in local detail enhancement. The proposed technique has been evaluated on 10 well-known weather-degraded color images and is also compared with four well-known existing image enhancement techniques. The experimental results reveal that the proposed technique outperforms the others regarding visible edge ratio, color gradients and number of saturated pixels.

  13. Improving the segmentation of therapy-induced leukoencephalopathy using apriori information and a gradient magnitude threshold

    NASA Astrophysics Data System (ADS)

    Glass, John O.; Reddick, Wilburn E.; Reeves, Cara; Pui, Ching-Hon

    2004-05-01

    Reliably quantifying therapy-induced leukoencephalopathy in children treated for cancer is a challenging task due to its varying MR properties and similarity to normal tissues and imaging artifacts. T1, T2, PD, and FLAIR images were analyzed for a subset of 15 children from an institutional protocol for the treatment of acute lymphoblastic leukemia. Three different analysis techniques were compared to examine improvements in the segmentation accuracy of leukoencephalopathy versus manual tracings by two expert observers. The first technique utilized no a priori information, only a white matter mask based on the segmentation of the first serial examination of each patient. MR images were then segmented with a Kohonen Self-Organizing Map. The other two techniques incorporate a priori maps from the ICBM atlas, spatially normalized to each patient and resliced using SPM99 software. The a priori maps were included as input, and a gradient magnitude threshold calculated on the FLAIR images was also utilized. The second technique used a 2-dimensional threshold, while the third algorithm utilized a 3-dimensional threshold. Kappa values were compared for the three techniques against each observer, and improvements were seen with each addition to the original algorithm (Observer 1: 0.651, 0.653, 0.744; Observer 2: 0.603, 0.615, 0.699).
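    A sketch under stated assumptions: a 2D gradient-magnitude threshold computed on a FLAIR slice (the percentile and the direction of the threshold are illustrative, not the paper's values) and Cohen's kappa as the agreement measure against a manual tracing.

    ```python
    # Sketch: FLAIR gradient-magnitude mask and kappa agreement with a manual tracing.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def flair_gradient_mask(flair_slice, percentile=90):
        gy, gx = np.gradient(flair_slice.astype(float))
        gmag = np.hypot(gx, gy)
        # Keep low-gradient voxels; the cutoff percentile here is an assumption.
        return gmag < np.percentile(gmag, percentile)

    def agreement(auto_mask, manual_mask):
        return cohen_kappa_score(auto_mask.ravel(), manual_mask.ravel())
    ```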

  14. A study on locating the sonic source of sinusoidal magneto-acoustic signals using a vector method.

    PubMed

    Zhang, Shunqi; Zhou, Xiaoqing; Ma, Ren; Yin, Tao; Liu, Zhipeng

    2015-01-01

    Methods based on the magneto-acoustic effect are of great significance in studying the electrical imaging properties of biological tissues and currents. The continuous wave method, which is commonly used, can only detect the current amplitude without the sound source position. Although the pulse mode adopted in magneto-acoustic imaging can locate the sonic source, its low measuring accuracy and low SNR have limited its application. In this study, a vector method was used to solve and analyze the magneto-acoustic signal based on the continuous sine wave mode. This study includes theoretical modeling of the vector method, simulations of the line model, and experiments with wire samples to analyze magneto-acoustic (MA) signal characteristics. The results showed that the amplitude and phase of the MA signal contained the location information of the sonic source. The amplitude and phase obeyed the vector theory in the complex plane. This study sets a foundation for a new technique to locate sonic sources for biomedical imaging of tissue conductivity. It also aids the study of biological current detection and reconstruction based on the magneto-acoustic effect.

  15. Toward detection of marine vehicles on horizon from buoy camera

    NASA Astrophysics Data System (ADS)

    Fefilatyev, Sergiy; Goldgof, Dmitry B.; Langebrake, Lawrence

    2007-10-01

    This paper presents a new technique for automatic detection of marine vehicles in the open sea from a buoy camera system using a computer vision approach. Users of such a system include border guards, the military, port safety and flow management, and sanctuary protection personnel. The system is intended to work autonomously, taking images of the surrounding ocean surface and analyzing them for the presence of marine vehicles. The goal of the system is to detect an approximate window around the ship and prepare the small image for transmission and human evaluation. The proposed computer vision-based algorithm combines a horizon detection method with edge detection and post-processing. A dataset of 100 images is used to evaluate the performance of the proposed technique. We discuss promising results of ship detection and suggest necessary improvements for achieving better performance.
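    An illustrative sketch, not the authors' exact algorithm: the horizon is taken as a dominant straight line in a Canny edge map found by a Hough transform; ship candidates would then be sought as edge clusters just above it. Thresholds and the input file name are assumptions.

    ```python
    # Sketch: horizon detection as the dominant Hough line in a Canny edge map.
    import cv2
    import numpy as np

    def detect_horizon(gray):
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=200)
        if lines is None:
            return None
        rho, theta = lines[0][0]      # take the first returned line as the horizon candidate
        return rho, theta

    # gray = cv2.imread('buoy_frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
    # horizon = detect_horizon(gray)
    ```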

  16. Imaging model for the scintillator and its application to digital radiography image enhancement.

    PubMed

    Wang, Qian; Zhu, Yining; Li, Hongwei

    2015-12-28

    Digital Radiography (DR) images obtained by an OCD-based (optical coupling detector) micro-CT system usually suffer from low contrast. In this paper, a mathematical model is proposed to describe the image formation process in the scintillator. By solving the corresponding inverse problem, the quality of DR images is improved, i.e., higher contrast and spatial resolution are obtained. By analyzing the radiative transfer process of visible light in the scintillator, scattering is identified as the main factor leading to low contrast. Moreover, the associated blurring effect is also considered and described by a point spread function (PSF). Based on these physical processes, the scintillator imaging model is then established. When solving the inverse problem, pre-correction of the x-ray intensity, a dark channel prior based haze-removal technique, and an effective blind deblurring approach are employed. Experiments on a variety of DR images show that the proposed approach can improve the contrast of DR images dramatically as well as effectively reduce blurring. Compared with traditional contrast enhancement methods, such as CLAHE, our method preserves the relative absorption values well.
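    A minimal sketch of the dark-channel computation that underlies dark-channel-prior haze removal, one ingredient of the pre-correction described above; the patch size and the single-channel simplification are assumptions.

    ```python
    # Sketch: dark channel of an image, the core quantity in dark-channel-prior dehazing.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        """For a single-channel DR image the dark channel reduces to a local minimum
        filter; for RGB input take the per-pixel channel minimum first."""
        base = img if img.ndim == 2 else img.min(axis=2)
        return minimum_filter(base, size=patch)

    # Atmospheric light is then typically estimated from the brightest dark-channel
    # pixels, and transmission from 1 - omega * dark_channel(I / A).
    ```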

  17. Probabilistic images (PBIS): A concise image representation technique for multiple parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, L.C.; Yeh, S.H.; Chen, Z.

    1984-01-01

    Based on m parametric images (PIs) derived from a dynamic series (DS), each pixel of the DS is regarded as an m-dimensional vector. Given one set of normal samples (pixels) N and another of abnormal samples A, probability density functions (pdfs) of both sets are estimated. Any unknown sample is classified into N or A by calculating the probability of its being in the abnormal set using Bayes' theorem. Instead of estimating the multivariate pdfs, a distance ratio transformation is introduced to map the m-dimensional sample space to a one-dimensional Euclidean space. Consequently, the image that localizes the regional abnormalities is characterized by the probability of being abnormal. This leads to the new representation scheme of PBIs. A Tc-99m HIDA study for detecting intrahepatic lithiasis (IL) was chosen as an example of constructing a PBI from 3 parameters derived from the DS, and such a PBI was compared with the 3 PIs, namely the retention ratio image (RRI), peak time image (TMAX) and excretion mean transit time image (EMTT). 32 normal subjects and 20 patients with proven IL were collected and analyzed. The resultant sensitivity and specificity of PBI were 97% and 98%, respectively. They were superior to those of any of the 3 PIs: RRI (94/97), TMAX (86/88) and EMTT (94/97). Furthermore, the contrast of PBI was much better than that of any other image. This new image formation technique, based on multiple parameters, shows functional abnormalities in a structural way. Its good contrast makes interpretation easy. This technique is powerful compared to the existing parametric image method.
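    A sketch assuming the general scheme described: kernel density estimates of a scalar feature (e.g. the distance-ratio transform of the three parametric values) for normal and abnormal training pixels, combined with Bayes' theorem to form the probability-of-abnormality image. The prior and the KDE choice are assumptions.

    ```python
    # Sketch: Bayes posterior P(abnormal | feature) per pixel from 1D training samples.
    import numpy as np
    from scipy.stats import gaussian_kde

    def probabilistic_image(feature_map, normal_samples, abnormal_samples, prior_a=0.5):
        pdf_n = gaussian_kde(normal_samples)      # pdf of the feature in normal pixels
        pdf_a = gaussian_kde(abnormal_samples)    # pdf of the feature in abnormal pixels
        x = feature_map.ravel()
        num = prior_a * pdf_a(x)
        den = num + (1.0 - prior_a) * pdf_n(x) + 1e-12
        return (num / den).reshape(feature_map.shape)   # PBI: probability of being abnormal
    ```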

  18. Investigation of carbonates in the Sutter's Mill meteorite grains with hyperspectral infrared imaging micro-spectroscopy

    NASA Astrophysics Data System (ADS)

    Yesiltas, Mehmet

    2018-04-01

    Synchrotron-based high spatial resolution hyperspectral infrared imaging technique provides thousands of infrared spectra with high resolution, thus allowing us to acquire detailed spatial maps of chemical molecular structures for many grains in short times. Utilizing this technique, thousands of infrared spectra were analyzed at once instead of inspecting each spectrum separately. Sutter's Mill meteorite is a unique carbonaceous type meteorite with highly heterogeneous chemical composition. Multiple grains from the Sutter's Mill meteorite have been studied using this technique and the presence of both hydrous and anhydrous silicate minerals have been observed. It is observed that the carbonate mineralogy varies from simple to more complex carbonates even within a few microns in the meteorite grains. These variations, the type and distribution of calcite-like vs. dolomite-like carbonates are presented by means of hyperspectral FTIR imaging spectroscopy with high resolution. Various scenarios for the formation of different carbonate compositions in the Sutter's Mill parent body are discussed.

  19. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor of ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
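    A minimal sketch of two of the reported image-quality metrics (normalized mean squared error and PSNR) comparing an undersampled reconstruction against the fully sampled reference; the blur metric is implementation-specific and omitted.

    ```python
    # Sketch: NMSE and PSNR of a CS reconstruction versus the fully sampled reference.
    import numpy as np

    def nmse(recon, reference):
        return np.sum((recon - reference) ** 2) / np.sum(reference ** 2)

    def psnr(recon, reference):
        mse = np.mean((recon - reference) ** 2)
        return 10 * np.log10(reference.max() ** 2 / mse)
    ```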

  20. [Wireless digital radiography detectors in the emergency area: an efficacious solution].

    PubMed

    Garrido Blázquez, M; Agulla Otero, M; Rodríguez Recio, F J; Torres Cabrera, R; Hernando González, I

    2013-01-01

    To evaluate the implementation of a flat panel digital radiography (DR) system with WiFi technology in an emergency radiology area in which a computed radiography (CR) system was previously used. We analyzed aspects related to image quality, radiation dose, workflow, and ergonomics. We compared the image quality obtained with the CR and WiFi DR systems, both in images acquired with a phantom and in radiologists' evaluations of radiological images from real patients. We also analyzed the time required for image acquisition and the workflow with the two technological systems. Finally, we analyzed the data related to the dose of radiation in patients before and after the implementation of the new equipment. Image quality improved both in the phantom tests and in radiological images obtained in patients, increasing from 3 to 4.5 on a 5-point scale. The average time required for image acquisition decreased by 25 seconds per image. The flat panel required less radiation to be delivered in practically all the techniques carried out using automatic dosimetry, although statistically significant differences were found in only some of the techniques (chest, thoracic spine, and lumbar spine). Implementing the WiFi DR system has brought benefits. Image quality has improved and the dose of radiation to patients has decreased. The new system also has advantages in terms of functionality, ergonomics, and performance. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.

  1. Comparing fusion techniques for the ImageCLEF 2013 medical case retrieval task.

    PubMed

    G Seco de Herrera, Alba; Schaer, Roger; Markonis, Dimitrios; Müller, Henning

    2015-01-01

    Retrieval systems can supply similar cases with a proven diagnosis to a new example case under observation to help clinicians during their work. The ImageCLEFmed evaluation campaign proposes a framework where research groups can compare case-based retrieval approaches. This paper focuses on the case-based task and adds results of the compound figure separation and modality classification tasks. Several fusion approaches are compared to identify the approaches best adapted to the heterogeneous data of the task. Fusion of visual and textual features is analyzed, demonstrating that the selection of the fusion strategy can improve the best performance on the case-based retrieval task. Copyright © 2014 Elsevier Ltd. All rights reserved.
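    An illustrative sketch of two simple late-fusion rules of the kind compared in such campaigns, CombSUM on min-max normalized scores and reciprocal-rank fusion; the specific strategies evaluated in the paper may differ.

    ```python
    # Sketch: two late-fusion rules for combining visual and textual retrieval runs.
    import numpy as np

    def combsum(score_lists):
        """score_lists: list of {case_id: score} dicts, one per retrieval system."""
        fused = {}
        for scores in score_lists:
            vals = np.array(list(scores.values()))
            lo, hi = vals.min(), vals.max()
            for cid, s in scores.items():
                fused[cid] = fused.get(cid, 0.0) + (s - lo) / (hi - lo + 1e-12)
        return sorted(fused, key=fused.get, reverse=True)

    def reciprocal_rank_fusion(rank_lists, k=60):
        fused = {}
        for ranking in rank_lists:                  # ranking: list of case_ids, best first
            for r, cid in enumerate(ranking, start=1):
                fused[cid] = fused.get(cid, 0.0) + 1.0 / (k + r)
        return sorted(fused, key=fused.get, reverse=True)
    ```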

  2. Application of the multiple PRF technique to resolve Doppler centroid estimation ambiguity for spaceborne SAR

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Curlander, J. C.

    1992-01-01

    Estimation of the Doppler centroid ambiguity is a necessary element of the signal processing for SAR systems with large antenna pointing errors. Without proper resolution of the Doppler centroid estimation (DCE) ambiguity, the image quality will be degraded in terms of the system impulse response function and geometric fidelity. Two techniques for resolution of DCE ambiguity for spaceborne SAR are presented; they include a brief review of the range cross-correlation technique and presentation of a new technique using multiple pulse repetition frequencies (PRFs). For SAR systems where other performance factors control selection of the PRFs, an algorithm is devised to resolve the ambiguity that uses PRFs of arbitrary numerical values. The performance of this multiple PRF technique is analyzed based on a statistical error model. An example demonstrates that, for the Shuttle Imaging Radar-C (SIR-C) C-band SAR, the probability of correct ambiguity resolution is higher than 95 percent for antenna attitude errors as large as 3 deg.

  3. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing and its basic concepts have prevailed. The core of this approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on research into and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. Meanwhile, the algorithm is modified by adjusting an area parameter and then combining the area parameter with a heterogeneity parameter. Several experiments are carried out and show that the modified FNEA algorithm, compared with a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed, produces a better segmentation result.
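    A hedged sketch of the watershed initialization step using scikit-image; the FNEA refinement with area and heterogeneity parameters is not reproduced, and the gradient-minimum seeding strategy is an assumption.

    ```python
    # Sketch: watershed over-segmentation used as an initialization for region merging.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def watershed_initial_segments(gray, min_distance=5):
        gradient = sobel(gray.astype(float))
        coords = peak_local_max(-gradient, min_distance=min_distance)  # gradient minima as seeds
        seeds = np.zeros(gray.shape, dtype=bool)
        seeds[tuple(coords.T)] = True
        markers, _ = ndi.label(seeds)
        return watershed(gradient, markers)   # label image of initial regions
    ```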

  4. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

    In this paper, we implemented a three-dimensional image display system using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. The input images are represented on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited using application software with the same scheduled recording time interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of the BPH, which was designed with an SA (simulated annealing) algorithm, and are represented on the LCD with a 0.05-second time interval using application software for reconstructing the stereo images. In the output plane, we used an LCD shutter that is synchronized to a monitor displaying alternate left- and right-eye images for depth perception. We demonstrated an optical experiment that stores and reconstructs four stereo images in BaTiO3 repeatedly using holographic optical memory techniques.

  5. Immunomagnetic cell separation, imaging, and analysis using Captivate ferrofluids

    NASA Astrophysics Data System (ADS)

    Jones, Laurie; Beechem, Joseph M.

    2002-05-01

    We have developed applications of Captivate™ ferrofluids, paramagnetic particles (approximately 200 nm diameter), for isolating and analyzing cell populations in combination with fluorescence-based techniques. Using a microscope-mounted magnetic yoke and sample insertion chamber, fluorescent images of magnetically captured cells were obtained in culture media, buffer, or whole blood, while non-magnetically labeled cells sedimented to the bottom of the chamber. We combined this immunomagnetic cell separation and imaging technique with fluorescent staining, spectroscopy, and analysis to evaluate cell surface receptor-containing subpopulations, live/dead cell ratios, apoptotic/dead cell ratios, etc. The acquired images were analyzed using multi-color parameters, as produced by nucleic acid staining, esterase activity, or antibody labeling. In addition, the immunomagnetically separated cell fractions were assessed through microplate analysis using the CyQUANT Cell Proliferation Assay. These methods should provide an inexpensive alternative to some flow cytometric measurements. The binding capacities of the streptavidin-labeled Captivate ferrofluid (SA-FF) particles were determined to be 8.8 nmol biotin/mg SA-FF, using biotin-4-fluorescein, and > 10^6 cells/mg SA-FF, using several cell types labeled with biotinylated probes. For goat anti-mouse IgG-labeled ferrofluids (GAM-FF), binding capacities were established to be approximately 0.2 - 7.5 nmol protein/mg GAM-FF using fluorescent conjugates of antibodies, protein G, and protein A.

  6. Non-destructive monitoring of creaming of oil-in-water emulsion-based formulations using magnetic resonance imaging.

    PubMed

    Onuki, Yoshinori; Horita, Akihiro; Kuribayashi, Hideto; Okuno, Yoshihide; Obata, Yasuko; Takayama, Kozo

    2014-07-01

    A non-destructive method for monitoring creaming of emulsion-based formulations is in great demand because it allows us to understand fully their instability mechanisms. This study was aimed at demonstrating the usefulness of magnetic resonance (MR) techniques, including MR imaging (MRI) and MR spectroscopy (MRS), for evaluating the physicochemical stability of emulsion-based formulations. Emulsions that are applicable as the base of practical skin creams were used as test samples. Substantial creaming was developed by centrifugation, which was then monitored by MRI. The creaming oil droplet layer and aqueous phase were clearly distinguished by quantitative MRI by measuring T1 and the apparent diffusion coefficient. Components in a selected volume in the emulsions could be analyzed using MRS. Then, model emulsions having different hydrophilic-lipophilic balance (HLB) values were tested, and the optimal HLB value for a stable dispersion was determined. In addition, the MRI examination enables the detection of creaming occurring in a polyethylene tube, which is commonly used for commercial products, without losing any image quality. These findings strongly indicate that MR techniques are powerful tools to evaluate the physicochemical stability of emulsion-based formulations. This study will make a great contribution to the development and quality control of emulsion-based formulations.

  7. Advances in molecular labeling, high throughput imaging and machine intelligence portend powerful functional cellular biochemistry tools.

    PubMed

    Price, Jeffrey H; Goodacre, Angela; Hahn, Klaus; Hodgson, Louis; Hunter, Edward A; Krajewski, Stanislaw; Murphy, Robert F; Rabinovich, Andrew; Reed, John C; Heynen, Susanne

    2002-01-01

    Cellular behavior is complex. Successfully understanding systems at ever-increasing complexity is fundamental to advances in modern science and unraveling the functional details of cellular behavior is no exception. We present a collection of prospectives to provide a glimpse of the techniques that will aid in collecting, managing and utilizing information on complex cellular processes via molecular imaging tools. These include: 1) visualizing intracellular protein activity with fluorescent markers, 2) high throughput (and automated) imaging of multilabeled cells in statistically significant numbers, and 3) machine intelligence to analyze subcellular image localization and pattern. Although not addressed here, the importance of combining cell-image-based information with detailed molecular structure and ligand-receptor binding models cannot be overlooked. Advanced molecular imaging techniques have the potential to impact cellular diagnostics for cancer screening, clinical correlations of tissue molecular patterns for cancer biology, and cellular molecular interactions for accelerating drug discovery. The goal of finally understanding all cellular components and behaviors will be achieved by advances in both instrumentation engineering (software and hardware) and molecular biochemistry. Copyright 2002 Wiley-Liss, Inc.

  8. Optical transmission testing based on asynchronous sampling techniques: images analysis containing chromatic dispersion using convolutional neural network

    NASA Astrophysics Data System (ADS)

    Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.

    2017-08-01

    The article presents an image analysis method, based on the asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network algorithm. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion), then to test its accuracy and estimate the impairment for a selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices encoding pixel positions. This article focuses only on the analysis of images containing chromatic dispersion.
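    A minimal sketch only; the framework (PyTorch), layer sizes, input resolution and the single-valued dispersion output are illustrative assumptions, not the network described in the paper.

    ```python
    # Sketch: a small CNN mapping a binary phase-portrait image to a dispersion estimate.
    import torch
    import torch.nn as nn

    class PhasePortraitCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.regressor = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 1)
            )

        def forward(self, x):          # x: (batch, 1, 64, 64) binary phase portraits
            return self.regressor(self.features(x))   # estimated chromatic dispersion
    ```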

  9. Differential laser-induced perturbation spectroscopy and fluorescence imaging for biological and materials sensing

    NASA Astrophysics Data System (ADS)

    Burton, Dallas Jonathan

    The field of laser-based diagnostics has been a topic of research in various fields, more specifically for applications in environmental studies, military defense technologies, and medicine, among many others. In this dissertation, a novel laser-based optical diagnostic method, differential laser-induced perturbation spectroscopy (DLIPS), has been implemented in a spectroscopy mode and expanded into an imaging mode in combination with fluorescence techniques. The DLIPS method takes advantage of deep ultraviolet (UV) laser perturbation at sub-ablative energy fluences to photochemically cleave bonds and alter fluorescence signal response before and after perturbation. The resulting difference spectrum or differential image adds more information about the target specimen, and can be used in combination with traditional fluorescence techniques for detection of certain materials, characterization of many materials and biological specimen, and diagnosis of various human skin conditions. The differential aspect allows for mitigation of patient or sample variation, and has the potential to develop into a powerful, noninvasive optical sensing tool. The studies in this dissertation encompass efforts to continue the fundamental research on DLIPS including expansion of the method to an imaging mode. Five primary studies have been carried out and presented. These include the use of DLIPS in a spectroscopy mode for analysis of nitrogen-based explosives on various substrates, classification of Caribbean fruit flies versus Caribbean fruit flies that have been irradiated with gamma rays, and diagnosis of human skin cancer lesions. The nitrogen-based explosives and Caribbean fruit flies have been analyzed with the DLIPS scheme using the imaging modality, providing complementary information to the spectroscopic scheme. In each study, a comparison between absolute fluorescence signals and DLIPS responses showed that DLIPS statistically outperformed traditional fluorescence techniques with regards to regression error and classification.

  10. An array processing system for lunar geochemical and geophysical data

    NASA Technical Reports Server (NTRS)

    Eliason, E. M.; Soderblom, L. A.

    1977-01-01

    A computerized array processing system has been developed to reduce, analyze, display, and correlate a large number of orbital and earth-based geochemical, geophysical, and geological measurements of the moon on a global scale. The system supports the activities of a consortium of about 30 lunar scientists involved in data synthesis studies. The system was modeled after standard digital image-processing techniques but differs in that processing is performed with floating point precision rather than integer precision. Because of flexibility in floating-point image processing, a series of techniques that are impossible or cumbersome in conventional integer processing were developed to perform optimum interpolation and smoothing of data. Recently color maps of about 25 lunar geophysical and geochemical variables have been generated.

  11. Non-imaged based method for matching brains in a common anatomical space for cellular imagery.

    PubMed

    Midroit, Maëllie; Thevenet, Marc; Fournel, Arnaud; Sacquet, Joelle; Bensafi, Moustafa; Breton, Marine; Chalençon, Laura; Cavelius, Matthias; Didier, Anne; Mandairon, Nathalie

    2018-04-22

    Cellular imagery using histology sections is one of the most common techniques used in Neuroscience. However, this inescapable technique has severe limitations due to the need to delineate regions of interest on each brain, which is time consuming and variable across experimenters. We developed algorithms based on a vectors field elastic registration allowing fast, automatic realignment of experimental brain sections and associated labeling in a brain atlas with high accuracy and in a streamlined way. Thereby, brain areas of interest can be finely identified without outlining them and different experimental groups can be easily analyzed using conventional tools. This method directly readjusts labeling in the brain atlas without any intermediate manipulation of images. We mapped the expression of cFos, in the mouse brain (C57Bl/6J) after olfactory stimulation or a non-stimulated control condition and found an increased density of cFos-positive cells in the primary olfactory cortex but not in non-olfactory areas of the odor-stimulated animals compared to the controls. Existing methods of matching are based on image registration which often requires expensive material (two-photon tomography mapping or imaging with iDISCO) or are less accurate since they are based on mutual information contained in the images. Our new method is non-imaged based and relies only on the positions of detected labeling and the external contours of sections. We thus provide a new method that permits automated matching of histology sections of experimental brains with a brain reference atlas. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Detecting cells in time varying intensity images in confocal microscopy for gene expression studies in living cells

    NASA Astrophysics Data System (ADS)

    Mitra, Debasis; Boutchko, Rostyslav; Ray, Judhajeet; Nilsen-Hamilton, Marit

    2015-03-01

    In this work we present a time-lapsed confocal microscopy image analysis technique for an automated gene expression study of multiple single living cells. Fluorescence Resonance Energy Transfer (FRET) is a technology by which molecule-to-molecule interactions are visualized. We analyzed a dynamic series of ~102 images obtained using confocal microscopy of fluorescence in yeast cells containing RNA reporters that give a FRET signal when the gene promoter is activated. For each time frame, separate images are available for three spectral channels and the integrated intensity snapshot of the system. A large number of time-lapsed frames must be analyzed to identify each cell individually across time and space, as it is moving in and out of the focal plane of the microscope. This makes it a difficult image processing problem. We have proposed an algorithm here, based on scale-space technique, which solves the problem satisfactorily. The algorithm has multiple directions for even further improvement. The ability to rapidly measure changes in gene expression simultaneously in many cells in a population will open the opportunity for real-time studies of the heterogeneity of genetic response in a living cell population and the interactions between cells that occur in a mixed population, such as the ones found in the organs and tissues of multicellular organisms.
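    A sketch under assumptions: a scale-space (Laplacian-of-Gaussian) blob detector from scikit-image applied per frame to locate cells whose apparent size changes as they drift through the focal plane; the paper's own scale-space algorithm and parameter values may differ.

    ```python
    # Sketch: per-frame scale-space blob detection of cells in a confocal time series.
    from skimage.feature import blob_log

    def detect_cells(frame, min_sigma=2, max_sigma=10, threshold=0.05):
        """frame: 2D intensity image; returns an array of (row, col, sigma) per blob."""
        return blob_log(frame, min_sigma=min_sigma, max_sigma=max_sigma,
                        threshold=threshold)
    ```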

  13. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters are then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI are presented along with the feature extraction and target classification via the RVM.

  14. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    PubMed

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for differential classification of tissue thin sections identifying tumorous regions. However, long acquisition time by the so far used FT-IR-based microscopes hampered the clinical translation of this technique. Here, the used quantum cascade laser-based microscope provides now infrared images for precise tissue classification within few minutes. We analyzed 110 patients with UICC-Stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity of this label-free method as compared to histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR-Imaging is overcome now by the short acquisition time for high quality diagnostic images, which is in the same time range as frozen sections by pathologists.

  15. Optical Linear Algebra for Computational Light Transport

    NASA Astrophysics Data System (ADS)

    O'Toole, Matthew

    Active illumination refers to optical techniques that use controllable lights and cameras to analyze the way light propagates through the world. These techniques confer many unique imaging capabilities (e.g. high-precision 3D scanning, image-based relighting, imaging through scattering media), but at a significant cost; they often require long acquisition and processing times, rely on predictive models for light transport, and cease to function when exposed to bright ambient sunlight. We develop a mathematical framework for describing and analyzing such imaging techniques. This framework is deeply rooted in numerical linear algebra, and models the transfer of radiant energy through an unknown environment with the so-called light transport matrix. Performing active illumination on a scene equates to applying a numerical operator on this unknown matrix. The brute-force approach to active illumination follows a two-step procedure: (1) optically measure the light transport matrix and (2) evaluate the matrix operator numerically. This approach is infeasible in general, because the light transport matrix is often much too large to measure, store, and analyze directly. Using principles from optical linear algebra, we evaluate these matrix operators in the optical domain, without ever measuring the light transport matrix in the first place. Specifically, we explore numerical algorithms that can be implemented partially or fully with programmable optics. These optical algorithms provide solutions to many longstanding problems in computer vision and graphics, including the ability to (1) photo-realistically change the illumination conditions of a given photo with only a handful of measurements, (2) accurately capture the 3D shape of objects in the presence of complex transport properties and strong ambient illumination, and (3) overcome the multipath interference problem associated with time-of-flight cameras. Most importantly, we introduce an all-new imaging regime---optical probing---that provides unprecedented control over which light paths contribute to a photo.

  16. FLIM data analysis of NADH and Tryptophan autofluorescence in prostate cancer cells

    NASA Astrophysics Data System (ADS)

    O'Melia, Meghan J.; Wallrabe, Horst; Svindrych, Zdenek; Rehman, Shagufta; Periasamy, Ammasi

    2016-03-01

    Fluorescence lifetime imaging microscopy (FLIM) is one of the most sensitive techniques to measure metabolic activity in living cells, tissues and whole animals. We used two- and three-photon fluorescence excitation together with time-correlated single photon counting (TCSPC) to acquire FLIM signals from normal and prostate cancer cell lines. FLIM requires complex data fitting and analysis; we explored different ways to analyze the data to match diverse cellular morphologies. After non-linear least square fitting of the multi-photon TCSPC images by the SPCImage software (Becker & Hickl), all image data are exported and further processed in ImageJ. Photon images provide morphological, NAD(P)H signal-based autofluorescent features, for which regions of interest (ROIs) are created. Applying these ROIs to all image data parameters with a custom ImageJ macro, generates a discrete, ROI specific database. A custom Excel (Microsoft) macro further analyzes the data with charts and statistics. Applying this highly automated assay we compared normal and cancer prostate cell lines with respect to their glycolytic activity by analyzing the NAD(P)H-bound fraction (a2%), NADPH/NADH ratio and efficiency of energy transfer (E%) for Tryptophan (Trp). Our results show that this assay is able to differentiate the effects of glucose stimulation and Doxorubicin in these prostate cell lines by tracking the changes in a2% of NAD(P)H, NADPH/NADH ratio and the changes in Trp E%. The ability to isolate a large, ROI-based data set, reflecting the heterogeneous cellular environment and highlighting even subtle changes -- rather than whole cell averages - makes this assay particularly valuable.

  17. Spectrum image analysis tool - A flexible MATLAB solution to analyze EEL and CL spectrum images.

    PubMed

    Schmidt, Franz-Philipp; Hofer, Ferdinand; Krenn, Joachim R

    2017-02-01

    Spectrum imaging techniques, gaining simultaneously structural (image) and spectroscopic data, require appropriate and careful processing to extract information of the dataset. In this article we introduce a MATLAB based software that uses three dimensional data (EEL/CL spectrum image in dm3 format (Gatan Inc.'s DigitalMicrograph ® )) as input. A graphical user interface enables a fast and easy mapping of spectral dependent images and position dependent spectra. First, data processing such as background subtraction, deconvolution and denoising, second, multiple display options including an EEL/CL moviemaker and, third, the applicability on a large amount of data sets with a small work load makes this program an interesting tool to visualize otherwise hidden details. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Pc-Based Floating Point Imaging Workstation

    NASA Astrophysics Data System (ADS)

    Guzak, Chris J.; Pier, Richard M.; Chinn, Patty; Kim, Yongmin

    1989-07-01

    The medical, military, scientific and industrial communities have come to rely on imaging and computer graphics for solutions to many types of problems. Systems based on imaging technology are used to acquire and process images, and analyze and extract data from images that would otherwise be of little use. Images can be transformed and enhanced to reveal detail and meaning that would go undetected without imaging techniques. The success of imaging has increased the demand for faster and less expensive imaging systems and as these systems become available, more and more applications are discovered and more demands are made. From the designer's perspective the challenge to meet these demands forces him to attack the problem of imaging from a different perspective. The computing demands of imaging algorithms must be balanced against the desire for affordability and flexibility. Systems must be flexible and easy to use, ready for current applications but at the same time anticipating new, unthought of uses. Here at the University of Washington Image Processing Systems Lab (IPSL) we are focusing our attention on imaging and graphics systems that implement imaging algorithms for use in an interactive environment. We have developed a PC-based imaging workstation with the goal to provide powerful and flexible, floating point processing capabilities, along with graphics functions in an affordable package suitable for diverse environments and many applications.

  19. Digital image analysis techniques for fiber and soil mixtures.

    DOT National Transportation Integrated Search

    1999-05-01

    The objective of image processing is to visually enhance, quantify, and/or statistically evaluate some aspect of an image not readily apparent in its original form. Processed digital image data can be analyzed in numerous ways. In order to summarize ...

  20. Analysis on the 3D crosstalk in stereoscopic display

    NASA Astrophysics Data System (ADS)

    Choi, Hee-Jin

    2010-11-01

    Nowadays, with rapid progress in flat panel display (FPD) technologies, three-dimensional (3D) display is becoming the next mainstream of the display market. Among the various 3D display techniques, the stereoscopic 3D display shows different left/right images to each eye of the observer using special glasses and is the most popular 3D technique, with the advantages of low price and high 3D resolution. However, current stereoscopic 3D displays suffer from 3D crosstalk, the interference between the left-eye and right-eye images, which severely degrades the quality of the 3D image. In this paper, the meaning and causes of 3D crosstalk in stereoscopic 3D displays are introduced, and previously proposed methods of 3D crosstalk measurement in vision science are reviewed. Based on these, the threshold of 3D crosstalk required to realize a 3D display with no perceived degradation is analyzed.
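    A hedged sketch of one commonly used luminance-based crosstalk definition; the paper reviews several measurement methods, so this is illustrative rather than its specific formula, and the numbers in the example are made up.

    ```python
    # Sketch: crosstalk(%) = (leakage - black) / (signal - black) * 100, measured
    # through the glasses for one eye.
    def crosstalk_percent(leakage_luminance, signal_luminance, black_luminance):
        return 100.0 * (leakage_luminance - black_luminance) / (
            signal_luminance - black_luminance)

    # Example (hypothetical values): 2.3 cd/m2 of the unintended image leaking into the
    # left eye, 0.4 cd/m2 black level, 150 cd/m2 intended signal -> about 1.3 % crosstalk.
    # print(crosstalk_percent(2.3, 150.0, 0.4))
    ```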

  1. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
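    A minimal sketch of the intensity-weighted center-of-mass step; the eight-direction perimeter search and the neural-network overlap decomposition are not reproduced here.

    ```python
    # Sketch: intensity-weighted centroid of one segmented object in a grayscale frame.
    import numpy as np

    def intensity_weighted_centroid(image, mask):
        """image: 2D grayscale frame; mask: boolean pixels belonging to one object."""
        rows, cols = np.nonzero(mask)
        weights = image[rows, cols].astype(float)
        total = weights.sum()
        return (np.dot(rows, weights) / total, np.dot(cols, weights) / total)
    ```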

  2. The Generation of Novel MR Imaging Techniques to Visualize Inflammatory/Degenerative Mechanisms and the Correlation of MR Data with 3D Microscopic Changes

    DTIC Science & Technology

    2012-09-01

    Using techniques to concurrently stain and three-dimensionally analyze many cell types, the new methods allowed visualization of structures that are impossible to resolve with current methods and made visible structures in damaged samples that could not be seen using conventional techniques. Award number: W81XWH-11-1-0705.

  3. Prioritizing Scientific Data for Transmission

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Anderson, Robert; Estlin, Tara; DeCoste, Dennis; Gaines, Daniel; Mazzoni, Dominic; Fisher, Forest; Judd, Michele

    2004-01-01

    A software system has been developed for prioritizing newly acquired geological data onboard a planetary rover. The system has been designed to enable efficient use of limited communication resources by transmitting the data likely to have the most scientific value. This software operates onboard a rover by analyzing collected data, identifying potential scientific targets, and then using that information to prioritize data for transmission to Earth. Currently, the system is focused on the analysis of acquired images, although the general techniques are applicable to a wide range of data modalities. Image prioritization is performed using two main steps. In the first step, the software detects features of interest from each image. In its current application, the system is focused on visual properties of rocks. Thus, rocks are located in each image and rock properties, such as shape, texture, and albedo, are extracted from the identified rocks. In the second step, the features extracted from a group of images are used to prioritize the images using three different methods: (1) identification of key target signature (finding specific rock features the scientist has identified as important), (2) novelty detection (finding rocks we haven't seen before), and (3) representative rock sampling (finding the most average sample of each rock type). These methods use techniques such as K-means unsupervised clustering and a discrimination-based kernel classifier to rank images based on their interest level.
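    An illustrative sketch of the "representative rock sampling" idea: K-means clustering of rock feature vectors, keeping per cluster the sample closest to the cluster centre. The feature set and selection rule are assumptions, not the flight software.

    ```python
    # Sketch: pick one representative rock per K-means cluster of feature vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    def representative_indices(rock_features, n_types=3):
        """rock_features: (n_rocks, n_features) array; returns one index per cluster."""
        km = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit(rock_features)
        picks = []
        for c in range(n_types):
            members = np.where(km.labels_ == c)[0]
            d = np.linalg.norm(rock_features[members] - km.cluster_centers_[c], axis=1)
            picks.append(members[np.argmin(d)])       # rock closest to the cluster centre
        return picks
    ```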

  4. Determination of fiber volume in graphite/epoxy materials using computer image analysis

    NASA Technical Reports Server (NTRS)

    Viens, Michael J.

    1990-01-01

    The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross-sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
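    A hedged sketch: fiber volume fraction estimated from a polished cross-section micrograph by Otsu thresholding, a simple stand-in for the image analysis software used in the study; the assumption that fibers are the brighter phase may need inverting for a given imaging setup.

    ```python
    # Sketch: areal (~volume) fraction of fibers from a grayscale cross-section image.
    import numpy as np
    from skimage.filters import threshold_otsu

    def fiber_volume_fraction(gray_micrograph):
        """Assumes fibers appear brighter than the epoxy matrix; invert if not."""
        t = threshold_otsu(gray_micrograph)
        fiber_pixels = gray_micrograph > t
        return fiber_pixels.mean()          # areal fraction, taken as the volume fraction
    ```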

  5. A common-path phase-shift interferometry surface plasmon imaging system

    NASA Astrophysics Data System (ADS)

    Su, Y.-T.; Chen, Shean-Jen; Yeh, T.-L.

    2005-03-01

    A biosensing imaging system is proposed based on the integration of surface plasmon resonance (SPR) and common-path phase-shift interferometry (PSI) techniques to measure the two-dimensional spatial phase variation caused by biomolecular interactions upon a sensing chip. The SPR phase imaging system can offer high-resolution and high-throughput screening capabilities to analyze microarray biomolecular interactions without the need for additional labeling. With the long-term stability advantage of the common-path PSI technique even under external disturbances such as mechanical vibration, buffer flow noise, and laser instability, the system can meet the demand of real-time kinetic study for biomolecular interaction analysis (BIA). The SPR-PSI imaging system has achieved a detection limit of 2×10⁻⁷ refractive index change, a long-term phase stability of 2.5×10⁻⁴ π rms over four hours, and a spatial phase resolution of 10⁻³ π with a lateral resolution of 100 μm.

  6. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs from an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images.
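
    The blob-finding stage can be approximated with a standard Laplacian-of-Gaussian detector. The sketch below is our illustration, not the paper's scale-adapted method; it searches the inverted green channel of a fundus image for small dark blobs that could serve as MA candidates, with illustrative sigma and threshold values:

        import numpy as np
        from skimage.feature import blob_log

        def microaneurysm_candidates(fundus_rgb, min_sigma=1, max_sigma=4, threshold=0.02):
            """Return candidate blobs as rows of (row, col, sigma).

            Microaneurysms appear as small dark blobs in the green channel, so the
            channel is inverted before the blob search.
            """
            green = fundus_rgb[..., 1].astype(float) / 255.0
            return blob_log(1.0 - green, min_sigma=min_sigma,
                            max_sigma=max_sigma, num_sigma=8, threshold=threshold)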

  7. Application of principal component analysis for improvement of X-ray fluorescence images obtained by polycapillary-based micro-XRF technique

    NASA Astrophysics Data System (ADS)

    Aida, S.; Matsuno, T.; Hasegawa, T.; Tsuji, K.

    2017-07-01

    Micro X-ray fluorescence (micro-XRF) analysis is widely used as a means of producing elemental maps. In some cases, however, the XRF images of trace elements that are obtained are not clear due to high background intensity. To solve this problem, we applied principal component analysis (PCA) to the XRF spectra, focusing on improving the quality of the XRF images. XRF images of the dried residue of a standard solution on a glass substrate were taken, and the XRF intensities for the dried residue were analyzed before and after PCA. The standard deviations of XRF intensities in the PCA-filtered images were improved, leading to clear contrast in the images. This improvement of the XRF images was effective in cases where the XRF intensity was weak.
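
    A minimal sketch of the PCA filtering idea, assuming the micro-XRF data are arranged as one spectrum per pixel: keeping only the leading principal components and reconstructing suppresses channel-wise noise before the elemental maps are rebuilt. The component count is an illustrative choice, not the paper's setting:

        import numpy as np
        from sklearn.decomposition import PCA

        def pca_filter_spectra(spectra, n_components=5):
            """Denoise XRF spectra (shape: n_pixels x n_channels) via truncated PCA."""
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(spectra)      # project onto leading components
            return pca.inverse_transform(scores)     # reconstruct filtered spectra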

  8. Computer-aided assessment of pulmonary disease in novel swine-origin H1N1 influenza on CT

    NASA Astrophysics Data System (ADS)

    Yao, Jianhua; Dwyer, Andrew J.; Summers, Ronald M.; Mollura, Daniel J.

    2011-03-01

    The 2009 pandemic was a global outbreak of novel H1N1 influenza. Radiologic images can be used to assess the presence and severity of pulmonary infection. We developed a computer-aided assessment system to analyze CT images from Swine-Origin Influenza A virus (S-OIV) novel H1N1 cases. The technique is based on the analysis of lung texture patterns and classification using a support vector machine (SVM). Pixel-wise tissue classification is computed from the SVM value. The method was validated on four H1N1 cases and ten normal cases. We demonstrated that the technique can detect regions of pulmonary abnormality in novel H1N1 patients and differentiate these regions from visually normal lung (area under the ROC curve of 0.993). This technique can also be applied to differentiate regions infected by different pulmonary diseases.
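
    A toy sketch of the SVM step, using patch mean and standard deviation as stand-ins for the richer lung texture features used in the paper; the signed decision value plays the role of the per-pixel tissue score:

        import numpy as np
        from sklearn.svm import SVC

        def train_texture_svm(patches, labels):
            """Fit an RBF SVM on simple per-patch texture features (mean, std)."""
            flat = patches.reshape(len(patches), -1).astype(float)
            feats = np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)
            return SVC(kernel="rbf").fit(feats, labels)

        def tissue_score(clf, patch):
            """Signed SVM decision value for one patch (higher = more abnormal)."""
            f = np.array([[patch.mean(), patch.std()]], dtype=float)
            return float(clf.decision_function(f))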

  9. A novel quantum LSB-based steganography method using the Gray code for colored quantum images

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Farzadnia, Ehsan

    2017-10-01

    As one of the prevalent data-hiding techniques, steganography is the act of concealing secret information imperceptibly in a cover medium (text, image, video, or audio), so that communication between the sender and the receiver reveals the secret data to no one but the receiver. In this approach, a quantum LSB-based steganography method utilizing the Gray code for quantum RGB images is investigated. The method uses the Gray code to accommodate two secret qubits in the 3 LSBs of each pixel simultaneously according to reference tables. Experimental results, analyzed in the MATLAB environment, show that the present scheme performs well and is more secure and applicable than the previous one found in the literature.
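
    The Gray-code mapping at the heart of the scheme is easy to state classically; the quantum circuit itself is not reproduced here. A minimal sketch of the code conversion (and its inverse) from which such reference tables can be built:

        def to_gray(n):
            """Gray code of a non-negative integer: adjacent codes differ in one bit."""
            return n ^ (n >> 1)

        def from_gray(g):
            """Invert the Gray code back to the plain binary integer."""
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        # Example: the two secret bits 0b10 map to Gray code 0b11, which would be
        # spread over the 3 LSBs of an RGB pixel according to the reference tables.
        assert to_gray(0b10) == 0b11 and from_gray(0b11) == 0b10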

  10. Image quality stability of whole-body diffusion weighted imaging.

    PubMed

    Chen, Yun-bin; Hu, Chun-miao; Zhong, Jing; Sun, Fei

    2009-06-01

    To assess the reproducibility of the whole-body diffusion weighted imaging (WB-DWI) technique in healthy volunteers under normal breathing with background body signal suppression. WB-DWI was performed on 32 healthy volunteers twice within a two-week period using a short-TI inversion-recovery diffusion-weighted echo-planar imaging sequence and the built-in body coil. The volunteers were scanned across six stations continuously, covering the entire body from head to feet under normal breathing. The bone apparent diffusion coefficient (ADC) and exponential ADC (eADC) of regions of interest (ROIs) were measured. We analyzed the correlation of the results using a paired t-test to assess the reproducibility of the WB-DWI technique. We successfully collected and analyzed data from 64 WB-DWI images. There was no significant difference in bone ADC and eADC of 824 ROIs between the paired observers and paired scans (P>0.05). Most of the images from all stations were of diagnostic quality. The measurements of bone ADC and eADC have good reproducibility. The WB-DWI technique under normal breathing with background body signal suppression is adequate.

  11. Thermographic Imaging of the Space Shuttle During Re-Entry Using a Near Infrared Sensor

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; hide

    2012-01-01

    High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. This data has provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data is critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods as well as stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness. Keywords: HYTHIRM, Space Shuttle thermography, hypersonic imaging, near infrared imaging, histogram analysis, singular value decomposition, eigenvalue image sharpness

  12. Image-Based Predictive Modeling of Heart Mechanics.

    PubMed

    Wang, V Y; Nielsen, P M F; Nash, M P

    2015-01-01

    Personalized biophysical modeling of the heart is a useful approach for noninvasively analyzing and predicting in vivo cardiac mechanics. Three main developments support this style of analysis: state-of-the-art cardiac imaging technologies, modern computational infrastructure, and advanced mathematical modeling techniques. In vivo measurements of cardiac structure and function can be integrated using sophisticated computational methods to investigate mechanisms of myocardial function and dysfunction, and can aid in clinical diagnosis and developing personalized treatment. In this article, we review the state-of-the-art in cardiac imaging modalities, model-based interpretation of 3D images of cardiac structure and function, and recent advances in modeling that allow personalized predictions of heart mechanics. We discuss how using such image-based modeling frameworks can increase the understanding of the fundamental biophysics behind cardiac mechanics, and assist with diagnosis, surgical guidance, and treatment planning. Addressing the challenges in this field will require a coordinated effort from both the clinical-imaging and modeling communities. We also discuss future directions that can be taken to bridge the gap between basic science and clinical translation.

  13. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Spatial non-uniformity means that different locations in an image are of different importance for the perception of the image. In other words, the perceived image quality depends mainly on the quality of important locations known as regions of interest. The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM based on image structural information, VIF based on the information that the human brain can ideally gain from the reference image, or FSIM utilizing low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes the analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
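
    One concrete way to probe the question raised above is to compare a full-reference metric evaluated globally against the same metric restricted to the ROI. The sketch below does this with SSIM from scikit-image; the ROI mask and the comparison rule are our assumptions, not the paper's protocol:

        import numpy as np
        from skimage.metrics import structural_similarity

        def global_vs_roi_ssim(reference, reconstructed, roi_mask):
            """Return (global SSIM, mean SSIM inside the ROI) for two uint8 images."""
            global_ssim, ssim_map = structural_similarity(reference, reconstructed, full=True)
            return global_ssim, float(ssim_map[roi_mask].mean())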

  14. On techniques for angle compensation in nonideal iris recognition.

    PubMed

    Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A

    2007-10-01

    The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.

  15. Layover and shadow detection based on distributed spaceborne single-baseline InSAR

    NASA Astrophysics Data System (ADS)

    Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren

    2014-03-01

    Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality Digital Elevation Models. Layover and shadow are ubiquitous phenomena in SAR images because of the imaging geometry of SAR. In the signal processing of single-baseline InSAR, the phase singularity of layover and shadow regions makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, the paper proposes a signal-number estimation method based on information-theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicality of the proposed method are validated in simulation experiments and theoretical analysis.

  16. Second-harmonic patterned polarization-analyzed reflection confocal microscope

    NASA Astrophysics Data System (ADS)

    Okoro, Chukwuemeka; Toussaint, Kimani C.

    2017-08-01

    We introduce the second-harmonic patterned polarization-analyzed reflection confocal (SPPARC) microscope, a multimodal imaging platform that integrates Mueller matrix polarimetry with reflection confocal and second-harmonic generation (SHG) microscopy. SPPARC microscopy provides label-free three-dimensional (3-D), SHG-patterned confocal images that lend themselves to spatially dependent, linear polarimetric analysis for extraction of rich polarization information based on the Mueller calculus. To demonstrate its capabilities, we use SPPARC microscopy to analyze both porcine tendon and ligament samples and find differences in both circular degree-of-polarization and depolarization parameters. Moreover, using the collagen-generated SHG signal as an endogenous counterstain, we show that the technique can be used to provide 3-D polarimetric information of the surrounding extrafibrillar matrix plus cells, or EFMC region. The unique characteristics of SPPARC microscopy hold strong potential to more accurately and quantitatively describe microstructural changes in collagen-rich samples in three spatial dimensions.

  17. Fetal cerebral imaging - ultrasound vs. MRI: an update.

    PubMed

    Blondiaux, Eléonore; Garel, Catherine

    2013-11-01

    The purpose of this article is to analyze the advantages and limitations of prenatal ultrasonography (US) and magnetic resonance imaging (MRI) in the evaluation of the fetal brain. These imaging modalities should not be seen as competitive but rather as complementary. There are wide variations around the world regarding screening policies, technology, skills, and legislation about termination of pregnancy, and these variations markedly affect the way prenatal imaging is used. According to the contribution expected from each technique and to local working conditions, one should choose the most appropriate imaging modality on a case-by-case basis. The advantages and limitations of US and MRI in the setting of fetal brain imaging are presented. Different anatomical regions (midline, ventricles, subependymal area, cerebral parenchyma, pericerebral space, posterior fossa) and pathological conditions are analyzed and illustrated in order to compare the respective contribution of each technique. An accurate prenatal diagnosis of cerebral abnormalities is of utmost importance for prenatal counseling.

  18. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
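
    The segmentation step relies on a local entropy estimate. A minimal sketch (the tile size and the 8-bit assumption are ours) of the per-block first-order entropy that could drive the grouping of regions before coding:

        import numpy as np

        def block_entropy(block):
            """First-order entropy (bits/pixel) of an 8-bit image block."""
            hist = np.bincount(block.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def entropy_map(image, block=32):
            """Entropy of each non-overlapping tile of an 8-bit image."""
            rows, cols = image.shape[0] // block, image.shape[1] // block
            out = np.empty((rows, cols))
            for i in range(rows):
                for j in range(cols):
                    out[i, j] = block_entropy(image[i*block:(i+1)*block, j*block:(j+1)*block])
            return out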

  19. Hand motion modeling for psychology analysis in job interview using optical flow-history motion image: OF-HMI

    NASA Astrophysics Data System (ADS)

    Khalifa, Intissar; Ejbali, Ridha; Zaied, Mourad

    2018-04-01

    To survive the competition, companies always want to have the best employees. The selection depends on the candidate's answers to the interviewer's questions and on the candidate's behavior during the interview session. The study of this behavior is always based on a psychological analysis of the movements accompanying the answers and discussions. Few techniques have been proposed to date to automatically analyze a candidate's non-verbal behavior. This paper is part of a work-psychology recognition system; it concentrates on spontaneous hand gestures, which psychologists consider very significant in interviews. We propose a motion history representation of the hand based on a hybrid approach that merges optical flow and history motion images. The optical flow technique is used first to detect hand motions in each frame of a video sequence. Secondly, we use the history motion images (HMI) to accumulate the output of the optical flow in order to obtain a good representation of the hand's local movement in a global temporal template.
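
    A rough sketch of the two-step idea (dense optical flow followed by a decaying accumulation of the detected motion) using OpenCV's Farneback flow; the decay, threshold, and accumulation rule are illustrative simplifications of the OF-HMI template, not the authors' exact formulation:

        import cv2
        import numpy as np

        def flow_history(gray_frames, decay=0.9, motion_thresh=1.0):
            """Accumulate a motion-history-style template from dense optical flow."""
            history = np.zeros(gray_frames[0].shape, dtype=np.float32)
            for prev, nxt in zip(gray_frames[:-1], gray_frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                mag = np.linalg.norm(flow, axis=2)     # per-pixel motion magnitude
                history *= decay                       # fade older motion
                history[mag > motion_thresh] = 1.0     # stamp the newest motion
            return history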

  20. True Ortho Generation of Urban Area Using High Resolution Aerial Photos

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Stanley, David; Xin, Yubin

    2016-06-01

    The pros and cons of existing methods for true ortho generation are analyzed based on a critical literature review of its two major processing stages: visibility analysis and occlusion compensation. These methods process frame and pushbroom images using different algorithms for visibility analysis due to the need for perspective centers used by z-buffer (or similar) techniques. For occlusion compensation, the pixel-based approach likely results in excessive seamlines in the ortho-rectified images due to the use of a quality measure on a pixel-by-pixel rating basis. In this paper, we propose innovative solutions to tackle the aforementioned problems. For visibility analysis, an elevation buffer technique is introduced to employ plain elevations instead of the distances from perspective centers used by the z-buffer, which has the advantage of sensor independence. A segment-oriented strategy is developed to evaluate a plain cost measure per segment for occlusion compensation instead of the tedious quality rating per pixel. The cost measure directly evaluates the imaging geometry characteristics in ground space, and is also sensor independent. Experimental results are demonstrated using aerial photos acquired by an UltraCam camera.

  1. Performance limitations of temperature-emissivity separation techniques in long-wave infrared hyperspectral imaging applications

    NASA Astrophysics Data System (ADS)

    Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew

    2017-08-01

    Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.

  2. Copyright protection of remote sensing imagery by means of digital watermarking

    NASA Astrophysics Data System (ADS)

    Barni, Mauro; Bartolini, Franco; Cappellini, Vito; Magli, Enrico; Olmo, Gabriella; Zanini, R.

    2001-12-01

    The demand for remote sensing data has increased dramatically, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. As in many other fields, along with the increase of market potential and product diffusion, the need arises for some sort of protection of the image products from unauthorized use. Such a need is a very crucial one, especially because the Internet and other public/private networks have become preferred and effective means of data exchange. An important issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. Before applying watermarking techniques developed for multimedia applications to remote sensing applications, it is important that the requirements imposed by remote sensing imagery are carefully analyzed to investigate whether they are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: (1) assessment of the requirements imposed by the characteristics of remotely sensed images on watermark-based copyright protection; (2) discussion of a case study where the performance of two popular, state-of-the-art watermarking techniques is evaluated in light of the aforementioned requirements.

  3. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer

    NASA Astrophysics Data System (ADS)

    Weng, Sheng; Xu, Xiaoyun; Li, Jiasong; Wong, Stephen T. C.

    2017-10-01

    Lung cancer is the most prevalent type of cancer and the leading cause of cancer-related deaths worldwide. Coherent anti-Stokes Raman scattering (CARS) is capable of providing cellular-level images and resolving pathologically related features on human lung tissues. However, conventional means of analyzing CARS images requires extensive image processing, feature engineering, and human intervention. This study demonstrates the feasibility of applying a deep learning algorithm to automatically differentiate normal and cancerous lung tissue images acquired by CARS. We leverage the features learned by pretrained deep neural networks and retrain the model using CARS images as the input. We achieve 89.2% accuracy in classifying normal, small-cell carcinoma, adenocarcinoma, and squamous cell carcinoma lung images. This computational method is a step toward on-the-spot diagnosis of lung cancer and can be further strengthened by the efforts aimed at miniaturizing the CARS technique for fiber-based microendoscopic imaging.

  4. A comparative quantitative analysis of the IDEAL (iterative decomposition of water and fat with echo asymmetry and least-squares estimation) and the CHESS (chemical shift selection suppression) techniques in 3.0 T L-spine MRI

    NASA Astrophysics Data System (ADS)

    Kim, Eng-Chan; Cho, Jae-Hwan; Kim, Min-Hye; Kim, Ki-Hong; Choi, Cheon-Woong; Seok, Jong-min; Na, Kil-Ju; Han, Man-Seok

    2013-03-01

    This study was conducted on 20 patients who had undergone pedicle screw fixation between March and December 2010 to quantitatively compare a conventional fat suppression technique, CHESS (chemical shift selection suppression), and a new technique, IDEAL (iterative decomposition of water and fat with echo asymmetry and least squares estimation). The general efficacy and usefulness of the IDEAL technique was also evaluated. Fat-suppressed transverse-relaxation-weighted images and longitudinal-relaxation-weighted images were obtained before and after contrast injection by using these two techniques with a 1.5T MR (magnetic resonance) scanner. The obtained images were analyzed for image distortion, susceptibility artifacts and homogeneous fat removal in the target region. The results showed that the image distortion due to the susceptibility artifacts caused by implanted metal was lower in the images obtained using the IDEAL technique compared to those obtained using the CHESS technique. The results of a qualitative analysis also showed that, compared to the CHESS technique, fewer susceptibility artifacts and more homogeneous fat removal were found in the images obtained using the IDEAL technique in a comparative image evaluation of the axial plane images before and after contrast injection. In summary, compared to the CHESS technique, the IDEAL technique showed a lower occurrence of susceptibility artifacts caused by metal and lower image distortion. In addition, more homogeneous fat removal was shown in the IDEAL technique.

  5. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy.

    PubMed

    Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine

    2015-10-27

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique.

  6. A Reproducible Computerized Method for Quantitation of Capillary Density using Nailfold Capillaroscopy

    PubMed Central

    Daskalakis, Constantine

    2015-01-01

    Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient’s microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of a currently available, complex computer-based algorithm for quantitating capillaroscopy data.1 This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744

  7. Calibration and validation of projection lithography in chemically amplified resist systems using fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Mason, Michael D.; Ray, Krishanu; Feke, Gilbert D.; Grober, Robert D.; Pohlers, Gerd; Cameron, James F.

    2003-05-01

    Coumarin 6 (C6), a pH-sensitive fluorescent molecule, was doped into commercial resist systems to demonstrate a cost-effective fluorescence microscopy technique for detecting latent photoacid images in exposed chemically amplified resist films. The fluorescence image contrast is optimized by carefully selecting optical filters to match the spectroscopic properties of C6 in the resist matrices. We demonstrate the potential of this technique for two specific non-invasive applications. First, a fast, convenient fluorescence technique is demonstrated for determination of the quantum yields of photo-acid generation. Since the Ka of C6 in the 193 nm resist system lies within the range of acid concentrations that can be photogenerated, we have used this technique to evaluate the acid generation efficiency of various photo-acid generators (PAGs). The technique is based on doping the resist formulations containing the candidate PAGs with C6, coating one wafer per PAG, patterning the wafer with a dose ramp, and spectroscopically imaging the wafers. The fluorescence of each pattern in the dose ramp is measured as a single image and analyzed with the optical titration model. Second, a nondestructive in-line diagnostic technique is developed for the focus calibration and validation of a projection lithography system. Our experimental results show excellent correlation between the fluorescence images and scanning electron microscope analysis of developed features. This technique has successfully been applied in both deep-UV resists (e.g., Shipley UVIIHS resist) and 193 nm resists (e.g., Shipley Vema-type resist). This method of focus calibration has also been extended to samples with feature sizes below the diffraction limit, where the pitch between adjacent features is on the order of 300 nm. Image capture, data analysis, and focus latitude verification are all computer controlled from a single hardware/software platform. Typical focus calibration curves can be obtained within several minutes.

  8. Shadow-free single-pixel imaging

    NASA Astrophysics Data System (ADS)

    Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang

    2017-11-01

    Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, as it is applicable to imaging at non-visible wavelengths and imaging under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and sometimes bring negative effects in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, following which a technique for shadow removal is proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations so that the shadows in the reconstructed images corresponding to each detector are complementary. A shadow-free reconstruction can be derived by fusing the shadow-complementary images using a maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
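
    A minimal sketch of the fusion step, assuming one reconstruction per detector position: a simple gain calibration equalizes mean intensities (the mismatch mentioned above) and a pixel-wise maximum keeps, at every location, whichever detector was not shadowed. The calibration rule is our simplification, not the paper's method:

        import numpy as np

        def fuse_shadow_complementary(images):
            """Fuse shadow-complementary reconstructions with a maximum selection rule."""
            stack = np.stack([im.astype(float) for im in images])
            means = stack.mean(axis=(1, 2))
            stack *= (means.mean() / means)[:, None, None]   # crude intensity calibration
            return stack.max(axis=0)                         # pixel-wise maximum selection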

  9. Classification of rice grain varieties arranged in scattered and heap fashion using image processing

    NASA Astrophysics Data System (ADS)

    Bhat, Sudhanva; Panat, Sreedath; N, Arunachalam

    2017-03-01

    Inspection and classification of food grains is a manual process in many food grain processing industries. Automation of such a process will be beneficial for industries facing a shortage of skilled workers. Machine vision techniques are among the popular approaches for developing such automation. Most existing works on the topic deal with identification of the rice variety by analyzing images of well-separated and isolated rice grains, from which many geometrical features can be extracted. This paper proposes techniques to estimate geometrical parameters from images of scattered as well as heaped rice grains, where the grain boundaries are not clearly identifiable. A methodology based on convexity is proposed to separate touching rice grains in the scattered rice grain images and obtain their geometrical parameters. In the case of a heaped arrangement, a Pixel-Distance Contribution Function is defined and used to find points inside rice grains and then to locate the boundary points of the grains. These points are fitted with the equation of an ellipse to estimate their lengths and breadths. The proposed techniques are applied to images of scattered and heaped rice grains of different varieties. It is shown that each variety gives a unique set of results.
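
    Once the boundary points of an individual grain have been recovered, its length and breadth follow from an ellipse fit. A minimal sketch with OpenCV; the boundary-point extraction itself (the convexity or Pixel-Distance Contribution step) is not reproduced here:

        import cv2
        import numpy as np

        def grain_length_breadth(boundary_points):
            """Fit an ellipse to >= 5 boundary points (x, y) and return (length, breadth).

            The major and minor axes of the fitted ellipse serve as the grain's length
            and breadth estimates.
            """
            pts = np.asarray(boundary_points, dtype=np.float32).reshape(-1, 1, 2)
            (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(pts)
            return max(axis_a, axis_b), min(axis_a, axis_b)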

  10. Shear wave elastography using Wigner-Ville distribution: a simulated multilayer media study.

    PubMed

    Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan

    2016-08-01

    Shear Wave Elastography (SWE) is a quantitative ultrasound-based imaging modality for distinguishing normal and abnormal tissue types by estimating the local viscoelastic properties of the tissue. These properties have been estimated in many studies by propagating an ultrasound shear wave within the tissue and estimating parameters such as the speed of the wave. The vast majority of the proposed techniques are based on the cross-correlation of consecutive ultrasound images. In this study, we propose a new method of wave detection based on time-frequency (TF) analysis of the ultrasound signal. The proposed method is a modified version of the Wigner-Ville Distribution (WVD) technique. The TF components of the wave are detected in a propagating ultrasound wave within a simulated multilayer tissue, and the local properties are estimated based on the detected waves. Image processing techniques such as Alternative Sequential Filters (ASF) and the Circular Hough Transform (CHT) have been utilized to improve the estimation of TF components. This method has been applied to simulated data from the Wave3000™ software (CyberLogic Inc., New York, NY). These data simulate the propagation of an acoustic radiation force impulse within a two-layer tissue with slightly different viscoelastic properties between the layers. By analyzing the local TF components of the wave, we estimate the longitudinal and shear elasticities and viscosities of the media. This work shows that our proposed method is capable of distinguishing between different layers of a tissue.

  11. How Digital Image Processing Became Really Easy

    NASA Astrophysics Data System (ADS)

    Cannon, Michael

    1988-02-01

    In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic, and the rapid increase of commercial companies to market digital image processing software and hardware.

  12. Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review.

    PubMed

    Sudarshan, Vidya K; Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chandran, Vinod; Molinari, Filippo; Fujita, Hamido; Ng, Kwan Hoong

    2016-02-01

    Ultrasound is an important and low-cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high-frequency sound waves to acquire images of internal organs. It is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different echoes for ultrasound. Hence, it provides useful information about potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low pass and high pass filters. These filters help to identify the detail or sudden changes in intensity in the image. These changes are reflected in the wavelet coefficients. Various texture, statistical and image based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features to discriminate normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer using ultrasound images. Copyright © 2015 Elsevier Ltd. All rights reserved.
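
    A minimal sketch of the decomposition step the review is organized around: a 2-D DWT of a tumor ROI and simple energy and spread statistics of its detail sub-bands, the kind of wavelet-domain features typically fed to a supervised classifier. The wavelet family, level, and statistics are illustrative choices, not those of any particular reviewed paper:

        import numpy as np
        import pywt

        def wavelet_features(roi, wavelet="db4", level=2):
            """Energy and standard deviation of each DWT detail sub-band of a 2-D ROI."""
            coeffs = pywt.wavedec2(roi.astype(float), wavelet, level=level)
            feats = []
            for detail_level in coeffs[1:]:          # skip the approximation band
                for band in detail_level:            # (horizontal, vertical, diagonal)
                    feats.append(float(np.mean(band ** 2)))   # sub-band energy
                    feats.append(float(np.std(band)))         # sub-band spread
            return np.array(feats)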

  13. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. Good visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping some or all of the frames. The second step is tracking on the super-resolved images. Super-resolution imaging is a technique to obtain high-resolution images from low-resolution images. In this research, a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on an HSV color histogram, which copes with conditions where the color of the object varies. The computational complexity and large memory requirements needed for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and in good lighting conditions.
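
    A rough sketch of the Camshift step on hue histograms as provided by OpenCV; the super-resolution stage is assumed to have already been applied to the frames (or to cropped regions) beforehand, and the initial window is supplied by the user:

        import cv2
        import numpy as np

        def camshift_track(frames_bgr, init_window):
            """Track an object given an initial (x, y, w, h) window in the first frame."""
            x, y, w, h = init_window
            hsv0 = cv2.cvtColor(frames_bgr[0], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv0[y:y+h, x:x+w]], [0], None, [180], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
            term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            window, tracks = init_window, []
            for frame in frames_bgr[1:]:
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
                _, window = cv2.CamShift(backproj, window, term)   # track the hue peak
                tracks.append(window)
            return tracks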

  14. Use of satellite images in the evaluation of farmlands. [in Mexico

    NASA Technical Reports Server (NTRS)

    Lozano H., A. E.

    1978-01-01

    Remote sensing techniques in the evaluation of farmland in Mexico are discussed. Electronic analysis techniques and photointerpretation techniques are analyzed. Characteristics of the basic crops in Mexico as related to remote sensing are described.

  15. Web image retrieval using an effective topic and content-based technique

    NASA Astrophysics Data System (ADS)

    Lee, Ching-Cheng; Prabhakara, Rashmi

    2005-03-01

    There has been exponential growth in the amount of image data available on the World Wide Web since the early development of the Internet. With such a large amount of information and imagery available, and given its usefulness, an effective image retrieval system is greatly needed. In this paper, we present an effective approach with both image matching and indexing techniques that improves on existing integrated image retrieval methods. The technique follows a two-phase approach, integrating query-by-topic and query-by-example specification methods. In the first phase, topic-based image retrieval is performed using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. This technique consists of a focused crawler that lets the user enter not only the keyword for the topic-based search but also the scope in which the user wants to find the images. In the second phase, we use query-by-example specification to perform a low-level content-based image match in order to retrieve a smaller set of results relatively closer to the example image. From this, information related to the image features is automatically extracted from the query image. The main objective of our approach is to develop a functional image search and indexing technique and to demonstrate that better retrieval results can be achieved.

  16. A study on mastectomy samples to evaluate breast imaging quality and potential clinical relevance of differential phase contrast mammography.

    PubMed

    Hauser, Nik; Wang, Zhentian; Kubik-Huch, Rahel A; Trippel, Mafalda; Singer, Gad; Hohl, Michael K; Roessl, Ewald; Köhler, Thomas; van Stevendaal, Udo; Wieberneit, Nataly; Stampanoni, Marco

    2014-03-01

    Differential phase contrast and scattering-based x-ray mammography has the potential to provide additional and complementary clinically relevant information compared with absorption-based mammography. The purpose of our study was to provide a first statistical evaluation of the imaging capabilities of the new technique compared with digital absorption mammography. We investigated non-fixed mastectomy samples of 33 patients with invasive breast cancer, using grating-based differential phase contrast mammography (mammoDPC) with a conventional, low-brilliance x-ray tube. We simultaneously recorded absorption, differential phase contrast, and small-angle scattering signals that were combined into novel high-frequency-enhanced images with a dedicated image fusion algorithm. Six international, expert breast radiologists evaluated clinical digital and experimental mammograms in a 2-part blinded, prospective independent reader study. The results were statistically analyzed in terms of image quality and clinical relevance. The results of the comparison of mammoDPC with clinical digital mammography revealed the general quality of the images to be significantly superior (P < 0.001); sharpness, lesion delineation, as well as the general visibility of calcifications to be significantly more assessable (P < 0.001); and delineation of anatomic components of the specimens (surface structures) to be significantly sharper (P < 0.001). Spiculations were significantly better identified, and the overall clinically relevant information provided by mammoDPC was judged to be superior (P < 0.001). Our results demonstrate that complementary information provided by phase and scattering enhanced mammograms obtained with the mammoDPC approach deliver images of generally superior quality. This technique has the potential to improve radiological breast diagnostics.

  17. Thermographic imaging of the space shuttle during re-entry using a near-infrared sensor

    NASA Astrophysics Data System (ADS)

    Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; Tack, Steve; Bush, Brett C.; Dantowitz, Ronald F.; Kozubal, Marek J.

    2012-06-01

    High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. This data has provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data is critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods as well as stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness.

  18. 3D kinematics of mobile-bearing total knee arthroplasty using X-ray fluoroscopy.

    PubMed

    Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi

    2015-04-01

    Total knee arthroplasty (TKA) 3D kinematic analysis requires 2D/3D image registration of X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant. However, these techniques cannot provide information on the radiolucent polyethylene insert, since the insert silhouette does not appear clearly in X-ray images. Therefore, it is difficult to obtain the 3D kinematics of the polyethylene insert, particularly the mobile-bearing insert. A technique for 3D kinematic analysis of a mobile-bearing insert used in TKA was developed using X-ray fluoroscopy. The method was tested and a clinical application was evaluated. Tantalum beads and a CAD model of the mobile-bearing TKA insert are used for 3D pose estimation of the mobile-bearing insert used in TKA using X-ray fluoroscopy. The insert model was created using four identical tantalum beads precisely located at known positions in a polyethylene insert using a specially designed insertion device. Finally, the 3D pose of the insert model was estimated using a feature-based 2D/3D registration technique, using the silhouette of beads in fluoroscopic images and the corresponding CAD insert model. In vitro testing for the repeatability of the positioning of the tantalum beads and computer simulations for 3D pose estimation of the mobile-bearing insert were performed. The pose estimation accuracy achieved was sufficient for analyzing mobile-bearing TKA kinematics (RMS error: within 1.0 mm and 1.0°, except for medial-lateral translation). In a clinical application, nine patients with mobile-bearing TKA were investigated and analyzed with respect to a deep knee bending motion. A 3D kinematic analysis technique was developed that enables accurate quantitative evaluation of mobile-bearing TKA kinematics. This method may be useful for improving implant design and optimizing TKA surgical techniques.

  19. Eye gazing direction inspection based on image processing technique

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of the human eyes in real time and feed it back to the control system to improve the resolution of the displayed image at the center of the field of view. With current display instruments, this method can balance the field of view of the virtual scene against resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and accurately is the basis for realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing-direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, this paper proposes a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with gazing direction; by analyzing these changes in the eye images captured by the cameras, the gazing direction of the human eyes can be determined. In this paper, experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm detects the gazing direction directly from normal eye images and eliminates the need for special hardware. Experimental results show that the method is easy to implement and has high precision.

  20. Raman imaging from microscopy to macroscopy: Quality and safety control of biological materials

    USDA-ARS?s Scientific Manuscript database

    Raman imaging can analyze biological materials by generating detailed chemical images. Over the last decade, tremendous advancements in Raman imaging and data analysis techniques have overcome problems such as long data acquisition and analysis times and poor sensitivity. This review article introdu...

  1. Spatially variant apodization for squinted synthetic aperture radar images.

    PubMed

    Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo

    2007-08-01

    Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that lowers sidelobe levels while preserving resolution. The method implements a two-dimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on strip-mode synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.

  2. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  3. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing, we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single-event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer-independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  4. Content-based image retrieval for interstitial lung diseases using classification confidence

    NASA Astrophysics Data System (ADS)

    Dash, Jatindra Kumar; Mukhopadhyay, Sudipta; Prabhakar, Nidhi; Garg, Mandeep; Khandelwal, Niranjan

    2013-02-01

    A Content Based Image Retrieval (CBIR) system could exploit the wealth of High-Resolution Computed Tomography (HRCT) data stored in the archive by finding similar images to assist radiologists in self-learning and in the differential diagnosis of Interstitial Lung Diseases (ILDs). HRCT findings of ILDs are classified into several categories (e.g. consolidation, emphysema, ground glass, nodular, etc.) based on their texture-like appearance. Therefore, analysis of ILDs is considered a texture analysis problem. Many approaches have been proposed for CBIR of lung images using texture as the primitive visual content. This paper presents a new approach to CBIR for ILDs. The proposed approach makes use of a trained neural network (NN) to find the output class label of the query image. The degree of confidence of the NN classifier is analyzed using a Naive Bayes classifier that dynamically decides the size of the search space to be used for retrieval. The proposed approach is compared with three simple distance-based and one classifier-based texture retrieval approaches. Experimental results show that the proposed technique achieved the highest average precision of 92.60% with the lowest standard deviation of 20.82%.

  5. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze the bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent the architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  6. Lorentz force electrical impedance tomography using magnetic field measurements.

    PubMed

    Zengin, Reyhan; Gençer, Nevzat Güneri

    2016-08-21

    In this study, a magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated due to the induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as the calculation of the pick-up voltages due to a given acoustic excitation and known body properties. Firstly, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Secondly, the general formulation of the partial differential equations for the scalar and magnetic vector potentials is derived. To investigate the feasibility of this technique, numerical studies are conducted using finite element method based software. To sense the pick-up voltages a novel coil configuration (xy coil pairs) is proposed. A two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of the LPA transducer with eleven steering angles from -25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in measurements is assumed to be 80 dB. Simulation studies based on the sensitivity matrix analysis reveal that perturbations of 5 mm × 5 mm size can be detected up to a 3.5 cm depth.

  7. Lorentz force electrical impedance tomography using magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Zengin, Reyhan; Güneri Gençer, Nevzat

    2016-08-01

    In this study, magnetic field measurement technique is investigated to image the electrical conductivity properties of biological tissues using Lorentz forces. This technique is based on electrical current induction using ultrasound together with an applied static magnetic field. The magnetic field intensity generated due to induced currents is measured using two coil configurations, namely, a rectangular loop coil and a novel xy coil pair. A time-varying voltage is picked-up and recorded while the acoustic wave propagates along its path. The forward problem of this imaging modality is defined as calculation of the pick-up voltages due to a given acoustic excitation and known body properties. Firstly, the feasibility of the proposed technique is investigated analytically. The basic field equations governing the behaviour of time-varying electromagnetic fields are presented. Secondly, the general formulation of the partial differential equations for the scalar and magnetic vector potentials are derived. To investigate the feasibility of this technique, numerical studies are conducted using a finite element method based software. To sense the pick-up voltages a novel coil configuration (xy coil pairs) is proposed. Two-dimensional numerical geometry with a 16-element linear phased array (LPA) ultrasonic transducer (1 MHz) and a conductive body (breast fat) with five tumorous tissues is modeled. The static magnetic field is assumed to be 4 Tesla. To understand the performance of the imaging system, the sensitivity matrix is analyzed. The sensitivity matrix is obtained for two different locations of LPA transducer with eleven steering angles from -25° to 25° at intervals of 5°. The characteristics of the imaging system are shown with the singular value decomposition (SVD) of the sensitivity matrix. The images are reconstructed with the truncated SVD algorithm. The signal-to-noise ratio in measurements is assumed 80 dB. Simulation studies based on the sensitivity matrix analysis reveal that perturbations with 5 mm × 5 mm size can be detected up to a 3.5 cm depth.
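
    The reconstruction step above inverts the sensitivity matrix with a rank-truncated SVD. As a generic, hedged illustration of that step (not the authors' code; matrix sizes, noise level, and the retained rank are arbitrary assumptions), a NumPy sketch:

```python
import numpy as np

def truncated_svd_reconstruct(S, b, rank):
    """Solve S x ≈ b with a rank-truncated pseudoinverse.

    S    : (n_measurements, n_pixels) sensitivity matrix
    b    : (n_measurements,) pick-up voltage data
    rank : number of singular values retained (regularization parameter)
    """
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]          # discard small singular values
    return Vt.T @ (s_inv * (U.T @ b))      # x = V S_r^{-1} U^T b

# Toy example: 200 measurements, 400-pixel image, noisy data.
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 400))
x_true = np.zeros(400)
x_true[150:160] = 1.0                      # small conductivity perturbation
b = S @ x_true + 1e-4 * rng.standard_normal(200)
x_rec = truncated_svd_reconstruct(S, b, rank=100)
```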

  8. Supporting flight data analysis for Space Shuttle Orbiter Experiments at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Budnick, M. P.; Yang, L.; Chiasson, M. P.

    1983-01-01

    The Space Shuttle Orbiter Experiments program is responsible for collecting flight data to extend the research and technology base for future aerospace vehicle design. The Infrared Imagery of Shuttle (IRIS), Catalytic Surface Effects, and Tile Gap Heating experiments sponsored by Ames Research Center are part of this program. The paper describes the software required to process the flight data which support these experiments. In addition, data analysis techniques, developed in support of the IRIS experiment, are discussed. Using the flight data base, the techniques have provided information useful in analyzing and correcting problems with the experiment, and in interpreting the IRIS image obtained during the entry of the third Shuttle mission.

  9. Supporting flight data analysis for Space Shuttle Orbiter experiments at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Green, M. J.; Budnick, M. P.; Yang, L.; Chiasson, M. P.

    1983-01-01

    The space shuttle orbiter experiments program is responsible for collecting flight data to extend the research and technology base for future aerospace vehicle design. The infrared imagery of shuttle (IRIS), catalytic surface effects, and tile gap heating experiments sponsored by Ames Research Center are part of this program. The software required to process the flight data which support these experiments is described. In addition, data analysis techniques, developed in support of the IRIS experiment, are discussed. Using the flight data base, the techniques provide information useful in analyzing and correcting problems with the experiment, and in interpreting the IRIS image obtained during the entry of the third shuttle mission.

  10. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Chetty, I; Snyder, K

    Purpose: To implement a novel image analysis technique, “center pixel method”, to quantify end-to-end tests accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired to register to the reference CT images. 6D couch correction were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston Lutz (WL)more » tests were performed to quantify the errors of the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method. The deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method. Since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance was 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw, MLC and cone defined field sizes respectively. When the center pixel method was used, the mean and standard deviation was 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm respectively. Conclusion: Our results demonstrated that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.« less

  11. Review of Random Phase Encoding in Volume Holographic Storage

    PubMed Central

    Su, Wei-Chia; Sun, Ching-Cherng

    2012-01-01

    Random phase encoding is a unique technique for volume holograms that can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss the diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to the multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of the phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  12. Doctoral theses in diagnostic imaging: a study of Spanish production between 1976 and 2011.

    PubMed

    Machan, K; Sendra Portero, F

    2018-05-15

    To analyze the production of doctoral theses in diagnostic imaging in Spain in the period from 1976 through 2011 with the aim of a) determining the number of theses and their distribution over time, b) describing the production in terms of universities and directors, and c) analyzing the content of the theses according to the imaging technique, anatomic site, and type of research used. The TESEO database was searched for "radiología" and/or "diagnóstico por imagen" and for terms related to diagnostic imaging in the title of the thesis. A total of 1036 theses related to diagnostic imaging were produced in 37 Spanish universities (mean, 29.6 theses/year; range, 4-59). A total of 963 thesis directors were identified; 10 of these supervised 10 or more theses. Most candidates and directors were men, although since the 2000-2001 academic year the number of male and female candidates has been similar. The anatomic regions most often included in diagnostic imaging theses were the abdomen (22.5%), musculoskeletal system (21.8%), central nervous system (16.4%), and neck and face (15.6%). The imaging techniques most often included were ultrasonography over the entire period (25.5%) and magnetic resonance imaging in the last 5 years. Most theses (63.8%) were related to clinical research. Despite certain limitations, the TESEO database makes it possible to analyze the production of doctoral theses in Spain effectively. The annual mean production of theses in diagnostic imaging is higher than in other medical specialties. This analysis reflects the historic evolution of imaging techniques and research in radiology as well as the development of Spanish universities. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  13. Spotting Cheetahs: Identifying Individuals by Their Footprints.

    PubMed

    Jewell, Zoe C; Alibhai, Sky K; Weise, Florian; Munro, Stuart; Van Vuuren, Marlice; Van Vuuren, Rudie

    2016-05-01

    The cheetah (Acinonyx jubatus) is Africa's most endangered large felid and listed as Vulnerable with a declining population trend by the IUCN(1). It ranges widely over sub-Saharan Africa and in parts of the Middle East. Cheetah conservationists face two major challenges, conflict with landowners over the killing of domestic livestock, and concern over range contraction. Understanding of the latter remains particularly poor(2). Namibia is believed to support the largest number of cheetahs of any range country, around 30%, but estimates range from 2,905(3) to 13,520(4). The disparity is likely a result of the different techniques used in monitoring. Current techniques, including invasive tagging with VHF or satellite/GPS collars, can be costly and unreliable. The footprint identification technique(5) is a new tool accessible to both field scientists and also citizens with smartphones, who could potentially augment data collection. The footprint identification technique analyzes digital images of footprints captured according to a standardized protocol. Images are optimized and measured in data visualization software. Measurements of distances, angles, and areas of the footprint images are analyzed using a robust cross-validated pairwise discriminant analysis based on a customized model. The final output is in the form of a Ward's cluster dendrogram. A user-friendly graphic user interface (GUI) allows the user immediate access and clear interpretation of classification results. The footprint identification technique algorithms are species specific because each species has a unique anatomy. The technique runs in a data visualization software, using its own scripting language (jsl) that can be customized for the footprint anatomy of any species. An initial classification algorithm is built from a training database of footprints from that species, collected from individuals of known identity. An algorithm derived from a cheetah of known identity is then able to classify free-ranging cheetahs of unknown identity. The footprint identification technique predicts individual cheetah identity with an accuracy of >90%.

  14. Spotting Cheetahs: Identifying Individuals by Their Footprints

    PubMed Central

    Jewell, Zoe C.; Alibhai, Sky K.; Weise, Florian; Munro, Stuart; Van Vuuren, Marlice; Van Vuuren, Rudie

    2016-01-01

    The cheetah (Acinonyx jubatus) is Africa's most endangered large felid and listed as Vulnerable with a declining population trend by the IUCN(1). It ranges widely over sub-Saharan Africa and in parts of the Middle East. Cheetah conservationists face two major challenges, conflict with landowners over the killing of domestic livestock, and concern over range contraction. Understanding of the latter remains particularly poor(2). Namibia is believed to support the largest number of cheetahs of any range country, around 30%, but estimates range from 2,905(3) to 13,520(4). The disparity is likely a result of the different techniques used in monitoring. Current techniques, including invasive tagging with VHF or satellite/GPS collars, can be costly and unreliable. The footprint identification technique(5) is a new tool accessible to both field scientists and also citizens with smartphones, who could potentially augment data collection. The footprint identification technique analyzes digital images of footprints captured according to a standardized protocol. Images are optimized and measured in data visualization software. Measurements of distances, angles, and areas of the footprint images are analyzed using a robust cross-validated pairwise discriminant analysis based on a customized model. The final output is in the form of a Ward's cluster dendrogram. A user-friendly graphic user interface (GUI) allows the user immediate access and clear interpretation of classification results. The footprint identification technique algorithms are species specific because each species has a unique anatomy. The technique runs in a data visualization software, using its own scripting language (jsl) that can be customized for the footprint anatomy of any species. An initial classification algorithm is built from a training database of footprints from that species, collected from individuals of known identity. An algorithm derived from a cheetah of known identity is then able to classify free-ranging cheetahs of unknown identity. The footprint identification technique predicts individual cheetah identity with an accuracy of >90%. PMID:27167035

  15. Calibration and testing of a Raman hyperspectral imaging system to reveal powdered food adulteration

    PubMed Central

    Lohumi, Santosh; Lee, Hoonsoo; Kim, Moon S.; Qin, Jianwei; Kandpal, Lalit Mohan; Bae, Hyungjin; Rahman, Anisur

    2018-01-01

    The potential adulteration of foodstuffs has led to increasing concern regarding food safety and security, in particular for powdered food products, where cheap ground materials or hazardous chemicals can be added to increase the quantity of powder or to obtain the desired aesthetic quality. Due to the resulting potential health threat to consumers, the development of a fast, label-free, and non-invasive technique for the detection of adulteration over a wide range of food products is necessary. We therefore report the development of a rapid Raman hyperspectral imaging technique for the detection of food adulteration and for authenticity analysis. The Raman hyperspectral imaging system comprises a custom-designed laser illumination system, a sensing module, and a software interface. The laser illumination system generates a high-power 785 nm laser line, and the Gaussian-like intensity distribution of the laser beam is shaped by incorporating an engineered diffuser. The sensing module uses Rayleigh filters, an imaging spectrometer, and a detector to collect the Raman scattering signals along the laser line. Custom-built software acquires the Raman hyperspectral images and also facilitates real-time visualization of Raman chemical images of the scanned samples. The developed system was employed for the simultaneous detection of Sudan dye and Congo red dye adulteration in paprika powder, and benzoyl peroxide and alloxan monohydrate adulteration in wheat flour, at six different concentrations (w/w) from 0.05 to 1%. The collected Raman imaging data of the adulterated samples were analyzed to visualize and detect the adulterant concentrations by generating a binary image for each individual adulterant material. The results based on the Raman chemical images of the adulterants showed a strong correlation (R>0.98) between the added and the pixel-based calculated concentrations of the adulterant materials. The developed Raman imaging system can thus be considered a powerful analytical technique for the quality and authenticity analysis of food products. PMID:29708973

  16. Calibration and testing of a Raman hyperspectral imaging system to reveal powdered food adulteration.

    PubMed

    Lohumi, Santosh; Lee, Hoonsoo; Kim, Moon S; Qin, Jianwei; Kandpal, Lalit Mohan; Bae, Hyungjin; Rahman, Anisur; Cho, Byoung-Kwan

    2018-01-01

    The potential adulteration of foodstuffs has led to increasing concern regarding food safety and security, in particular for powdered food products, where cheap ground materials or hazardous chemicals can be added to increase the quantity of powder or to obtain the desired aesthetic quality. Due to the resulting potential health threat to consumers, the development of a fast, label-free, and non-invasive technique for the detection of adulteration over a wide range of food products is necessary. We therefore report the development of a rapid Raman hyperspectral imaging technique for the detection of food adulteration and for authenticity analysis. The Raman hyperspectral imaging system comprises a custom-designed laser illumination system, a sensing module, and a software interface. The laser illumination system generates a high-power 785 nm laser line, and the Gaussian-like intensity distribution of the laser beam is shaped by incorporating an engineered diffuser. The sensing module uses Rayleigh filters, an imaging spectrometer, and a detector to collect the Raman scattering signals along the laser line. Custom-built software acquires the Raman hyperspectral images and also facilitates real-time visualization of Raman chemical images of the scanned samples. The developed system was employed for the simultaneous detection of Sudan dye and Congo red dye adulteration in paprika powder, and benzoyl peroxide and alloxan monohydrate adulteration in wheat flour, at six different concentrations (w/w) from 0.05 to 1%. The collected Raman imaging data of the adulterated samples were analyzed to visualize and detect the adulterant concentrations by generating a binary image for each individual adulterant material. The results based on the Raman chemical images of the adulterants showed a strong correlation (R>0.98) between the added and the pixel-based calculated concentrations of the adulterant materials. The developed Raman imaging system can thus be considered a powerful analytical technique for the quality and authenticity analysis of food products.

  17. Detection of small-amplitude periodic surface pressure fluctuation by pressure-sensitive paint measurements using frequency-domain methods

    NASA Astrophysics Data System (ADS)

    Noda, Takahiro; Nakakita, Kazuyki; Wakahara, Masaki; Kameda, Masaharu

    2018-06-01

    Image measurement using pressure-sensitive paint (PSP) is an effective tool for analyzing the unsteady pressure field on the surface of a body in a low-speed air flow, which is associated with wind noise. In this study, the surface pressure fluctuation due to the tonal trailing edge (TE) noise for a two-dimensional NACA 0012 airfoil was quantitatively detected using a porous anodized aluminum PSP (AA-PSP). The emission from the PSP upon illumination by a blue laser diode was captured using a 12-bit high-speed complementary metal-oxide-semiconductor (CMOS) camera. The intensities of the captured images were converted to pressures using a standard intensity-based method. Three image-processing methods based on the fast Fourier transform (FFT) were tested to determine their efficiency in improving the signal-to-noise ratio (SNR) of the unsteady PSP data. In addition to two fundamental FFT techniques (the full data and ensemble averaging FFTs), a technique using the coherent output power (COP), which involves the cross correlation between the PSP data and the signal measured using a pointwise sound-level meter, was tested. Preliminary tests indicated that random photon shot noise dominates the intensity fluctuations in the captured PSP emissions above 200 Hz. Pressure fluctuations associated with the TE noise, whose dominant frequency is approximately 940 Hz, were successfully measured by analyzing 40,960 sequential PSP images recorded at 10 kfps. Quantitative validation using the power spectrum indicates that the COP technique is the most effective method of identification of the pressure fluctuation directly related to TE noise. It is possible to distinguish power differences with a resolution of 10 Pa^2 (4 Pa in amplitude) when the COP is employed, without the use of additional wind-off data. This resolution cannot be achieved by the ensemble averaging FFT because of an insufficient elimination of the background noise.
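
    The COP processing amounts to retaining only the part of each pixel's spectrum that is coherent with the reference microphone. A hedged NumPy/SciPy sketch of that computation for a single pixel time series follows; the 10 kfps frame rate and 940 Hz tone match values stated in the abstract, while the segment length, amplitudes, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def coherent_output_power(pixel_series, ref_series, fs, nperseg=1024):
    """Coherent output power of one PSP pixel with respect to a reference.

    COP(f) = |G_xy(f)|^2 / G_xx(f): the part of the pixel's power spectrum
    that is coherent with the reference microphone, which suppresses photon
    shot noise uncorrelated with the acoustic source.
    """
    f, Gxx = signal.welch(ref_series, fs=fs, nperseg=nperseg)
    _, Gxy = signal.csd(ref_series, pixel_series, fs=fs, nperseg=nperseg)
    return f, np.abs(Gxy) ** 2 / Gxx

# 40,960 frames at 10 kfps with a 940 Hz tone; shot noise dominates the pixel.
rng = np.random.default_rng(0)
fs = 10_000
t = np.arange(40_960) / fs
ref = np.sin(2 * np.pi * 940 * t)                          # microphone signal
pixel = 0.05 * np.sin(2 * np.pi * 940 * t) + rng.standard_normal(t.size)
freq, cop = coherent_output_power(pixel, ref, fs)
```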

  18. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2005-01-01

    Ever since the advent of Internet, there has been an immense growth in the amount of image data that is available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improvises on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query by topic and query by example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that not only provides for the user to enter the keyword for the topic-based search but also, the scope in which the user wants to find the images. The second phase uses the query by example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results of the example image. Information related to the image feature is automatically extracted from the query image by the image processing system. A technique that is not computationally intensive based on color feature is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.

  19. A novel methodology for querying web images

    NASA Astrophysics Data System (ADS)

    Prabhakara, Rashmi; Lee, Ching Cheng

    2004-12-01

    Ever since the advent of Internet, there has been an immense growth in the amount of image data that is available on the World Wide Web. With such a magnitude of image availability, an efficient and effective image retrieval system is required to make use of this information. This research presents an effective image matching and indexing technique that improvises on existing integrated image retrieval methods. The proposed technique follows a two-phase approach, integrating query by topic and query by example specification methods. The first phase consists of topic-based image retrieval using an improved text information retrieval (IR) technique that makes use of the structured format of HTML documents. It consists of a focused crawler that not only provides for the user to enter the keyword for the topic-based search but also, the scope in which the user wants to find the images. The second phase uses the query by example specification to perform a low-level content-based image match for the retrieval of smaller and relatively closer results of the example image. Information related to the image feature is automatically extracted from the query image by the image processing system. A technique that is not computationally intensive based on color feature is used to perform content-based matching of images. The main goal is to develop a functional image search and indexing system and to demonstrate that better retrieval results can be achieved with this proposed hybrid search technique.
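
    The second phase relies on an inexpensive color-feature match. As one hedged example of such a low-cost measure (not necessarily the specific feature used in the paper), the sketch below compares normalized joint RGB histograms by histogram intersection and ranks candidate images against a query.

```python
import numpy as np

def color_histogram(image_rgb, bins=8):
    """Joint RGB histogram, normalized to unit sum (image: H x W x 3, uint8)."""
    hist, _ = np.histogramdd(
        image_rgb.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

def rank_by_color(query_rgb, candidates_rgb):
    """Return candidate indices sorted by descending color similarity."""
    hq = color_histogram(query_rgb)
    scores = [histogram_intersection(hq, color_histogram(c))
              for c in candidates_rgb]
    return np.argsort(scores)[::-1]
```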

  20. Quantum red-green-blue image steganography

    NASA Astrophysics Data System (ADS)

    Heidari, Shahrokh; Pourarian, Mohammad Rasoul; Gheibi, Reza; Naseri, Mosayeb; Houshmand, Monireh

    One of the most important topics in the field of quantum information processing is quantum data hiding, including quantum steganography and quantum watermarking. This field is an efficient tool for protecting any kind of digital data. In this paper, three quantum color image steganography algorithms are investigated based on the Least Significant Bit (LSB). The first algorithm employs only one of the image’s channels to cover the secret data. The second procedure is based on an LSB XORing technique, and the last algorithm utilizes two channels of the color cover image for hiding the secret quantum data. The performances of the proposed schemes are analyzed using software simulations in the MATLAB environment. The analysis of PSNR, BER and histogram graphs indicates that the presented schemes exhibit acceptable performances, and theoretical analysis demonstrates that the network complexity of the approaches scales quadratically.
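
    As a classical analogue of the single-channel LSB scheme described above (a hedged sketch only; the paper's algorithms operate on quantum image representations and quantum circuits), the following NumPy code embeds a bit string in the least significant bits of one channel of a cover image and recovers it.

```python
import numpy as np

def embed_lsb(cover_rgb, secret_bits, channel=0):
    """Hide a bit array in the least significant bits of one color channel."""
    stego = cover_rgb.copy()
    chan = stego[..., channel].reshape(-1).copy()
    if secret_bits.size > chan.size:
        raise ValueError("secret too large for this cover image")
    # Clear each target LSB and OR in the secret bit.
    chan[:secret_bits.size] = (chan[:secret_bits.size] & 0xFE) | secret_bits
    stego[..., channel] = chan.reshape(stego.shape[:2])
    return stego

def extract_lsb(stego_rgb, n_bits, channel=0):
    """Recover the first n_bits hidden in the chosen channel."""
    return stego_rgb[..., channel].reshape(-1)[:n_bits] & 1

# Round-trip check on random data.
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
secret = rng.integers(0, 2, size=500, dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(cover, secret), 500), secret)
```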

  1. Steganographic optical image encryption system based on reversible data hiding and double random phase encoding

    NASA Astrophysics Data System (ADS)

    Chuang, Cheng-Hung; Chen, Yen-Lin

    2013-02-01

    This study presents a steganographic optical image encryption system based on reversible data hiding and double random phase encoding (DRPE) techniques. Conventional optical image encryption systems can securely transmit valuable images using an encryption method for possible application in optical transmission systems. Steganographic optical image encryption systems based on the DRPE technique have been investigated to hide secret data in encrypted images. However, the DRPE technique is vulnerable to attacks, and many of the data hiding methods used in DRPE systems can distort the decrypted images. The proposed system, based on reversible data hiding, uses a JBIG2 compression scheme to achieve lossless decrypted image quality and performs a prior encryption process; the DRPE technique thus enables a more secure optical encryption process. The proposed method extracts and compresses the bit planes of the original image using the lossless JBIG2 technique. The secret data are embedded in the remaining storage space. The RSA algorithm can cipher the compressed binary bits and the secret data for enhanced security. Experimental results show that the proposed system achieves a high data embedding capacity and lossless reconstruction of the original images.

  2. Lab on a CD.

    PubMed

    Madou, Marc; Zoval, Jim; Jia, Guangyao; Kido, Horacio; Kim, Jitae; Kim, Nahui

    2006-01-01

    In this paper, centrifuge-based microfluidic platforms are reviewed and compared with other popular microfluidic propulsion methods. The underlying physical principles of centrifugal pumping in microfluidic systems are presented and the various centrifuge fluidic functions, such as valving, decanting, calibration, mixing, metering, heating, sample splitting, and separation, are introduced. Those fluidic functions have been combined with analytical measurement techniques, such as optical imaging, absorbance, and fluorescence spectroscopy and mass spectrometry, to make the centrifugal platform a powerful solution for medical and clinical diagnostics and high throughput screening (HTS) in drug discovery. Applications of a compact disc (CD)-based centrifuge platform analyzed in this review include two-point calibration of an optode-based ion sensor, an automated immunoassay platform, multiple parallel screening assays, and cellular-based assays. The use of modified commercial CD drives for high-resolution optical imaging is discussed as well. From a broader perspective, we compare technical barriers involved in applying microfluidics for sensing and diagnostic use and applying such techniques to HTS. The latter poses less challenges and explains why HTS products based on a CD fluidic platform are already commercially available, whereas we might have to wait longer to see commercial CD-based diagnostics.

  3. Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges

    USGS Publications Warehouse

    Lemeshewsky, George P.; Schowengerdt, Robert A.

    2000-01-01

    Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
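
    A hedged sketch of the pixel-based maximum-selection fusion rule described above, using the PyWavelets package and a single decomposition level; the paper additionally uses a redundant, shift-invariant DWT and a local-correlation refinement, which are not reproduced here. The two input bands are assumed to be co-registered arrays of the same size.

```python
import numpy as np
import pywt

def dwt_max_fusion(nir, pan, wavelet="db2"):
    """Single-level DWT fusion with a pixel-wise maximum-selection rule.

    The detail coefficient with the larger absolute value is kept at each
    position, injecting the sharper (pan) high-frequency content into the
    NIR band; approximation coefficients are taken from the NIR band to
    preserve its radiometry.
    """
    cA_n, (cH_n, cV_n, cD_n) = pywt.dwt2(nir, wavelet)
    cA_p, (cH_p, cV_p, cD_p) = pywt.dwt2(pan, wavelet)

    def pick(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)

    fused = (cA_n, (pick(cH_n, cH_p), pick(cV_n, cV_p), pick(cD_n, cD_p)))
    return pywt.idwt2(fused, wavelet)
```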

  4. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
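
    The core of the simulation is a convolution of a known object function with the measured PSF, followed by resampling at the clinical pixel/slice spacing. A minimal two-dimensional NumPy sketch of that idea (grid sizes, the Gaussian test PSF, the resampling factor, and the function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def simulate_ct_image(object_fn, psf, resample_step=2):
    """Blur a fine-grid object function with the system PSF and resample
    at the coarser clinical pixel spacing."""
    psf = psf / psf.sum()                     # preserve mean density
    # Zero-padded FFT convolution on the fine grid.
    shape = [object_fn.shape[i] + psf.shape[i] - 1 for i in (0, 1)]
    blurred = np.fft.irfft2(
        np.fft.rfft2(object_fn, shape) * np.fft.rfft2(psf, shape), shape
    )
    # Crop back to the object grid ("same" convolution).
    r0, c0 = psf.shape[0] // 2, psf.shape[1] // 2
    blurred = blurred[r0:r0 + object_fn.shape[0], c0:c0 + object_fn.shape[1]]
    # Resample at the clinical pixel size.
    return blurred[::resample_step, ::resample_step]

# Example: square "nodule" on a fine grid, Gaussian test PSF, 4x resampling.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 4.0**2))
obj = np.zeros((256, 256))
obj[100:140, 100:140] = 1.0
sim = simulate_ct_image(obj, psf, resample_step=4)
```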

  5. The cascaded moving k-means and fuzzy c-means clustering algorithms for unsupervised segmentation of malaria images

    NASA Astrophysics Data System (ADS)

    Abdul-Nasir, Aimi Salihah; Mashor, Mohd Yusoff; Halim, Nurul Hazwani Abd; Mohamed, Zeehaida

    2015-05-01

    Malaria is a life-threatening parasitic infectious disease that accounts for nearly one million deaths each year. Because prompt and accurate diagnosis of malaria is required, the current study proposes an unsupervised pixel segmentation based on a clustering algorithm in order to obtain fully segmented red blood cells (RBCs) infected with malaria parasites from thin blood smear images of the P. vivax species. In order to obtain the segmented infected cells, the malaria images are first enhanced using a modified global contrast stretching technique. Then, an unsupervised segmentation technique based on a clustering algorithm is applied to the intensity component of the malaria image in order to segment the infected cells from the blood cell background. In this study, cascaded moving k-means (MKM) and fuzzy c-means (FCM) clustering algorithms are proposed for malaria slide image segmentation. After that, a median filter is applied to smooth the image as well as to remove unwanted regions such as small background pixels from the image. Finally, a seeded region growing area extraction algorithm is applied to remove large unwanted regions that still appear in the image and are too large to be removed by the median filter. The effectiveness of the proposed cascaded MKM and FCM clustering algorithms is analyzed qualitatively and quantitatively by comparing the proposed cascaded clustering algorithm with the MKM and FCM clustering algorithms individually. Overall, the results indicate that segmentation using the proposed cascaded clustering algorithm produces the best segmentation performance, achieving acceptable sensitivity as well as high specificity and accuracy values compared to the segmentation results provided by the MKM and FCM algorithms.
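
    As a simplified, hedged stand-in for the clustering step (plain k-means on the intensity component rather than the cascaded moving k-means / fuzzy c-means described in the paper), a self-contained NumPy sketch:

```python
import numpy as np

def kmeans_intensity_segment(intensity, k=3, n_iter=50, seed=0):
    """Cluster pixel intensities into k groups and return a label image.

    A plain k-means illustration of intensity-based clustering; in the
    stained smear images, the darkest cluster is a crude proxy for the
    infected-cell regions.
    """
    rng = np.random.default_rng(seed)
    x = intensity.astype(float).reshape(-1)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new_centers = np.array(
            [x[labels == j].mean() if np.any(labels == j) else centers[j]
             for j in range(k)]
        )
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels.reshape(intensity.shape), centers
```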

  6. Integration of 3D multimodal imaging data of a head and neck cancer and advanced feature recognition.

    PubMed

    Lotz, Judith M; Hoffmann, Franziska; Lotz, Johannes; Heldmann, Stefan; Trede, Dennis; Oetjen, Janina; Becker, Michael; Ernst, Günther; Maas, Peter; Alexandrov, Theodore; Guntinas-Lichius, Orlando; Thiele, Herbert; von Eggeling, Ferdinand

    2017-07-01

    In the last years, matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) became an imaging technique which has the potential to characterize complex tumor tissue. The combination with other modalities and with standard histology techniques was achieved by the use of image registration methods and enhances analysis possibilities. We analyzed an oral squamous cell carcinoma with up to 162 consecutive sections with MALDI MSI, hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) against CD31. Spatial segmentation maps of the MALDI MSI data were generated by similarity-based clustering of spectra. Next, the maps were overlaid with the H&E microscopy images and the results were interpreted by an experienced pathologist. Image registration was used to fuse both modalities and to build a three-dimensional (3D) model. To visualize structures below resolution of MALDI MSI, IHC was carried out for CD31 and results were embedded additionally. The integration of 3D MALDI MSI data with H&E and IHC images allows a correlation between histological and molecular information leading to a better understanding of the functional heterogeneity of tumors. This article is part of a Special Issue entitled: MALDI Imaging, edited by Dr. Corinna Henkel and Prof. Peter Hoffmann. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Numerical simulations of imaging satellites with optical interferometry

    NASA Astrophysics Data System (ADS)

    Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong

    2015-08-01

    An optical interferometry imaging system, which is composed of multiple sub-apertures, is a type of sensor that can break through the single-aperture limit and achieve high-resolution imaging. This technique can be used to precisely measure the shapes, sizes and positions of astronomical objects and satellites, and it can also be applied to space exploration, space debris observation, and satellite monitoring and surveys. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount and multiple sub-apertures, so instantaneous direct imaging through focal-plane combination is feasible. Since 2002, researchers at Shanghai Astronomical Observatory have been studying optical interferometry techniques. For the array configuration, two optimized layouts have been proposed instead of the symmetrical circular distribution: an asymmetrical circular distribution and a Y-type distribution. On this basis, two kinds of structure based on the Fizeau interferometric telescope were proposed: a Y-type telescope with independent sub-apertures, and a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then focus on simulations of image restoration for the Y-type and segmented-mirror telescopes. The Richardson-Lucy (RL) method, the Wiener method and the Ordered Subsets Expectation Maximization (OS-EM) method are studied, and the influence of different stopping rules is analyzed. At the end of the paper, we present the reconstruction results for images of several satellites.

  8. Task-based strategy for optimized contrast enhanced breast imaging: analysis of six imaging techniques for mammography and tomosynthesis

    NASA Astrophysics Data System (ADS)

    Ikejimba, Lynda; Kiarashi, Nooshin; Lin, Yuan; Chen, Baiyu; Ghate, Sujata V.; Zerhouni, Moustafa; Samei, Ehsan; Lo, Joseph Y.

    2012-03-01

    Digital breast tomosynthesis (DBT) is a novel x-ray imaging technique that provides 3D structural information of the breast. In contrast to 2D mammography, DBT minimizes tissue overlap potentially improving cancer detection and reducing number of unnecessary recalls. The addition of a contrast agent to DBT and mammography for lesion enhancement has the benefit of providing functional information of a lesion, as lesion contrast uptake and washout patterns may help differentiate between benign and malignant tumors. This study used a task-based method to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: contrast enhanced mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual energy subtraction mammography and tomosynthesis. Imaging performance was characterized using a detectability index d', derived from the system task transfer function (TTF), an imaging task, iodine contrast, and the noise power spectrum (NPS). The task modeled a 5 mm lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d' was generated as a function of dose and iodine concentration. In general, higher dose gave higher d', but for the lowest iodine concentration and lowest dose, dual energy subtraction tomosynthesis and temporal subtraction tomosynthesis demonstrated the highest performance.
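
    The detectability index combines the task spectrum, the TTF, and the NPS. A hedged sketch of one common formulation (a non-prewhitening model observer evaluated on a discrete frequency grid; the abstract does not spell out which observer model was used, so the formula and array names are assumptions):

```python
import numpy as np

def detectability_npw(task_ft, ttf, nps, du, dv):
    """Non-prewhitening detectability index d' on a discrete (u, v) grid.

    task_ft : |W(u,v)|, magnitude of the task (signal-difference) spectrum
    ttf     : task transfer function TTF(u,v)
    nps     : noise power spectrum NPS(u,v)
    du, dv  : frequency-bin widths, so the sums approximate 2-D integrals

    d'^2 = [sum |W|^2 TTF^2]^2 / sum |W|^2 TTF^2 NPS
    """
    signal_term = np.sum((task_ft * ttf) ** 2) * du * dv
    noise_term = np.sum((task_ft * ttf) ** 2 * nps) * du * dv
    return signal_term / np.sqrt(noise_term)
```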

  9. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g. different brands, resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.

  10. Optical image transformation and encryption by phase-retrieval-based double random-phase encoding and compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng; Yang, Yangrui; Liu, Xuemei; Zhou, Xin; Wei, Zhenzhuo

    2018-01-01

    An optical image transformation and encryption scheme is proposed based on double random-phase encoding (DRPE) and compressive ghost imaging (CGI) techniques. In this scheme, a secret image is first transformed into a binary image with the phase-retrieval-based DRPE technique, and then encoded by a series of random amplitude patterns according to the ghost imaging (GI) principle. Compressive sensing together with erosion and dilation operations is implemented to retrieve the secret image in the decryption process. This encryption scheme takes advantage of the complementary capabilities offered by the phase-retrieval-based DRPE and GI-based encryption techniques: the phase-retrieval-based DRPE is used to overcome the blurring defect of the decrypted image in GI-based encryption, while the CGI not only reduces the data amount of the ciphertext but also enhances the security of DRPE. Computer simulation results are presented to verify the performance of the proposed encryption scheme.
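
    A hedged NumPy sketch of the basic DRPE building block referred to above (random phase masks applied in the spatial and Fourier domains); the paper's full scheme adds a phase-retrieval stage and compressive ghost imaging on top, which are not reproduced here.

```python
import numpy as np

def drpe_encrypt(image, seed=0):
    """Classical double random-phase encoding of a real-valued image.

    Two independent random phase masks are applied in the input plane and
    in the Fourier plane; the result is a complex, noise-like field.
    """
    rng = np.random.default_rng(seed)
    phase1 = np.exp(2j * np.pi * rng.random(image.shape))   # input-plane mask
    phase2 = np.exp(2j * np.pi * rng.random(image.shape))   # Fourier-plane mask
    cipher = np.fft.ifft2(np.fft.fft2(image * phase1) * phase2)
    return cipher, (phase1, phase2)

def drpe_decrypt(cipher, masks):
    """Invert the two phase modulations and return the recovered amplitude."""
    phase1, phase2 = masks
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1)
    return np.abs(field)

# Round-trip check on a simple test image.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
cipher, masks = drpe_encrypt(img)
assert np.allclose(drpe_decrypt(cipher, masks), img)
```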

  11. Slit-scanning differential phase-contrast mammography: first experimental results

    NASA Astrophysics Data System (ADS)

    Roessl, Ewald; Daerr, Heiner; Koehler, Thomas; Martens, Gerhard; van Stevendaal, Udo

    2014-03-01

    The demands for a large field-of-view (FOV) and the stringent requirements for a stable acquisition geometry rank among the major obstacles for the translation of grating-based, differential phase-contrast techniques from the laboratory to clinical applications. While for state-of-the-art Full-Field-Digital Mammography (FFDM) FOVs of 24 cm x 30 cm are common practice, the specifications for mechanical stability are naturally derived from the detector pixel size which ranges between 50 and 100 μm. However, in grating-based, phase-contrast imaging, the relative placement of the gratings in the interferometer must be guaranteed to within micro-meter precision. In this work we report on first experimental results on a phase-contrast x-ray imaging system based on the Philips MicroDose L30 mammography unit. With the proposed approach we achieve a FOV of about 65 mm x 175 mm by the use of the slit-scanning technique. The demand for mechanical stability on a micrometer scale was relaxed by the specific interferometer design, i.e., a rigid, actuator-free mount of the phase-grating G1 with respect to the analyzer-grating G2 onto a common steel frame. The image acquisition and formation processes are described and first phase-contrast images of a test object are presented. A brief discussion of the shortcomings of the current approach is given, including the level of remaining image artifacts and the relatively inefficient usage of the total available x-ray source output.

  12. Cognitive approaches for patterns analysis and security applications

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Ogiela, Lidia

    2017-08-01

    This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, and a novel application of such systems to visual secret sharing is described. Visual shares of the divided information can be created with a threshold procedure, which may depend on personal abilities to recognize image details visible on the divided shares.

  13. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.

  14. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy.

    PubMed

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-11-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.

  15. Three-dimensional volume rendering of the ankle based on magnetic resonance images enables the generation of images comparable to real anatomy

    PubMed Central

    Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio

    2009-01-01

    We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon–bone–muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18–30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data. PMID:19678857

  16. Flight State Information Inference with Application to Helicopter Cockpit Video Data Analysis Using Data Mining Techniques

    NASA Astrophysics Data System (ADS)

    Shin, Sanghyun

    The National Transportation Safety Board (NTSB) has recently emphasized the importance of analyzing flight data as one of the most effective methods to improve the efficiency and safety of helicopter operations. By analyzing flight data with Flight Data Monitoring (FDM) programs, the safety and performance of helicopter operations can be evaluated and improved. In spite of the NTSB's efforts, the safety of helicopter operations has not improved at the same rate as that of worldwide airlines, and the accident rate of helicopters continues to be much higher than that of fixed-wing aircraft. One of the main reasons is that participation rates of the rotorcraft industry in FDM programs are low due to the high cost of the Flight Data Recorder (FDR), the need for a special readout device to decode the FDR, fear of punitive action, and other factors. Since a video camera is easily installed, accessible, and inexpensively maintained, cockpit video data could complement the FDR when one is present or possibly replace it when one is absent. Cockpit video data is composed of image and audio data: image data contains outside views through the cockpit windows and activities on the flight instrument panels, whereas audio data contains the sounds of alarms within the cockpit. The goal of this research is to develop, test, and demonstrate a cockpit video data analysis algorithm based on data mining and signal processing techniques that can help better understand situations in the cockpit and the state of a helicopter by efficiently and accurately inferring useful flight information from cockpit video data. Image processing algorithms based on data mining techniques are proposed to estimate a helicopter's attitude, such as the bank and pitch angles, identify indicators on the flight instrument panel, and read the gauges and numbers in the analogue gauge indicators and digital displays from cockpit image data. In addition, an audio processing algorithm based on signal processing and abrupt change detection techniques is proposed to identify the types of warning alarms and to detect the occurrence times of individual alarms from cockpit audio data. The proposed algorithms are then successfully applied to simulated and real helicopter cockpit video data to demonstrate and validate their performance.
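
    A minimal sketch of the audio side of such an analysis, assuming a mono audio track sampled at fs Hz: alarm onsets are flagged as abrupt jumps in short-time energy. The window length, threshold, and synthetic test signal are illustrative placeholders, not parameters from the dissertation.

      import numpy as np

      def detect_alarm_onsets(audio, fs, win_s=0.05, jump_factor=4.0):
          """Return onset times (s) where short-time energy jumps above a noise floor."""
          win = max(1, int(win_s * fs))
          n_frames = len(audio) // win
          frames = audio[:n_frames * win].reshape(n_frames, win)
          energy = (frames ** 2).mean(axis=1)              # short-time energy per frame
          floor = np.median(energy) + 1e-12                # robust noise-floor estimate
          loud = energy > jump_factor * floor
          onsets = np.where(loud[1:] & ~loud[:-1])[0] + 1  # rising edges only
          return onsets * win / fs

      # toy example: background noise, then a 1 kHz "alarm" tone starting at 2 s
      fs = 8000
      t = np.arange(0, 4, 1 / fs)
      audio = 0.01 * np.random.randn(t.size)
      audio[t >= 2.0] += 0.5 * np.sin(2 * np.pi * 1000 * t[t >= 2.0])
      print(detect_alarm_onsets(audio, fs))                # expected: approximately [2.0]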

  17. Aggregation induced emission enhancement (AIEE) characteristics of quinoline based compound - A versatile fluorescent probe for pH, Fe(III) ion, BSA binding and optical cell imaging

    NASA Astrophysics Data System (ADS)

    Manikandan, Irulappan; Chang, Chien-Huei; Chen, Chia-Ling; Sathish, Veerasamy; Li, Wen-Shan; Malathi, Mahalingam

    2017-07-01

    A novel benzimidazoquinoline derivative (AVT) was synthesized through a substitution reaction and characterized by various spectral techniques. Analysis of the optical properties of AVT by absorption and emission spectral studies in different environments, particularly with respect to solvents and pH, revealed intriguing characteristics, namely aggregation induced emission enhancement (AIEE) in THF and 'On-Off' pH sensing at neutral pH. The sensing behavior of AVT toward diverse metal ions and bovine serum albumin (BSA) was also studied. Among the metal ions, the Fe3+ ion alone tunes the fluorescence intensity of the AVT probe in aqueous medium from "turn-on" to "turn-off" through a ligand (probe) to metal charge transfer (LMCT) mechanism. The AVT probe in aqueous medium interacts strongly with BSA due to Fluorescence Resonance Energy Transfer (FRET), and the conformational change in BSA was further analyzed using synchronous fluorescence techniques. A docking study of AVT with BSA reveals that the active binding site is a tryptophan residue, which is also supported by the experimental results. Interestingly, the fluorescent AVT probe in cells was examined through cellular imaging studies using BT-549 and MDA-MB-231 cells. Thus, single-molecule-probe-based detection of multiple species and stimuli is described.

  18. Development of measurement system for gauge block interferometer

    NASA Astrophysics Data System (ADS)

    Chomkokard, S.; Jinuntuya, N.; Wongkokua, W.

    2017-09-01

    We developed a measurement system for collecting and analyzing fringe pattern images from a gauge block interferometer. The system is based on a Raspberry Pi, an open-source platform, with Python programming and the OpenCV image manipulation library. The images were recorded by the five-megapixel Raspberry Pi camera. Image noise was suppressed to obtain the best results in the analyses. The low-noise images were processed to find the edges of the fringe patterns using the contour technique for the phase shift analyses. We tested our system on the phase shift patterns between a gauge block and a reference plate. The phase shift patterns were measured by a Twyman-Green type interferometer using a He-Ne laser with the temperature controlled at 20.0 °C. The results of the measurement will be presented and discussed.
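
    A minimal sketch of the described fringe-edge extraction with OpenCV (denoise, threshold, contour detection), assuming an 8-bit grayscale fringe image stored as "fringe.png"; the filename and parameter values are placeholders, and this is not the authors' code.

      import cv2

      img = cv2.imread("fringe.png", cv2.IMREAD_GRAYSCALE)    # hypothetical camera frame
      blur = cv2.GaussianBlur(img, (5, 5), 0)                  # suppress sensor noise
      _, binary = cv2.threshold(blur, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      # OpenCV 4.x signature: returns (contours, hierarchy)
      contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
      print("fringe contours found:", len(contours))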

  19. Dynamic Photorefractive Memory and its Application for Opto-Electronic Neural Networks.

    NASA Astrophysics Data System (ADS)

    Sasaki, Hironori

    This dissertation describes the analysis of photorefractive crystal dynamics and its application to opto-electronic neural network systems. The realization of the dynamic photorefractive memory is investigated in terms of the following aspects: fast memory update, uniform grating multiplexing schedules, and the prevention of partial erasure of existing gratings. The fast memory update is realized by the selective erasure process, which superimposes a new grating on the original one with an appropriate phase shift. The dynamics of the selective erasure process is analyzed using the first-order photorefractive material equations and experimentally confirmed. The effects of beam coupling and fringe bending on the selective erasure dynamics are also analyzed by numerically solving a combination of coupled wave equations and the photorefractive material equation. An incremental recording technique is proposed as a uniform grating multiplexing schedule and compared with the conventional scheduled recording technique in terms of phase distribution in the presence of an external dc electric field, as well as the image gray scale dependence. The theoretical analysis and experimental results prove the superiority of the incremental recording technique over scheduled recording. A novel recirculating information memory architecture is proposed and experimentally demonstrated to prevent partial degradation of the existing gratings when accessing the memory. Gratings are circulated through a memory feedback loop based on the incremental recording dynamics and demonstrate robust read/write/erase capabilities. The dynamic photorefractive memory is applied to opto-electronic neural network systems. A module architecture based on the page-oriented dynamic photorefractive memory is proposed. This module architecture can implement two complementary interconnection organizations, fan-in and fan-out. The module system scalability and the learning capabilities are theoretically investigated using the photorefractive dynamics described in previous chapters of the dissertation. The implementation of a feed-forward image compression network with 900 input and 9 output neurons with 6-bit interconnection accuracy is experimentally demonstrated. Learning of a Perceptron network that determines sex based on input face images of 900 pixels is also successfully demonstrated.

  20. Three-dimensional hard and soft tissue imaging of the human cochlea by scanning laser optical tomography (SLOT)

    PubMed Central

    Mohebbi, Saleh; Andrade, José; Nolte, Lena; Meyer, Heiko; Heisterkamp, Alexander; Majdani, Omid

    2017-01-01

    The present study focuses on the application of scanning laser optical tomography (SLOT) for visualization of anatomical structures inside the human cochlea ex vivo. SLOT is a highly efficient laser-based microscopy technique which allows for tomographic imaging of the internal structure of transparent specimens. Thus, in the field of otology this technique is well suited to an ex vivo study of the inner ear anatomy. For this purpose, the preparation before imaging comprises decalcification, dehydration, as well as optical clearing of the cochlea samples in toto. Here, we demonstrate results of SLOT imaging visualizing hard and soft tissue structures with an optical resolution of down to 15 μm using extinction and autofluorescence as contrast mechanisms. Furthermore, the internal structure can be analyzed nondestructively and quantitatively in detail by sectioning of the three-dimensional datasets. The method of X-ray Micro Computed Tomography (μCT) has been previously applied to explanted cochleae and is solely based on absorption contrast. An advantage of SLOT is that it uses visible light for image formation and thus provides a variety of contrast mechanisms known from other light microscopy techniques, such as fluorescence or scattering. We show that SLOT data is consistent with μCT anatomical data and provides additional information by using fluorescence. We demonstrate that SLOT is applicable for cochleae with metallic cochlear implants (CI) that would lead to significant artifacts in μCT imaging. In conclusion, the present study demonstrates the capability of SLOT for high-resolution visualization of cleared human cochleae ex vivo using multiple contrast mechanisms and lays the foundation for a broad variety of additional studies. PMID:28873437

  1. The Extended-Image Tracking Technique Based on the Maximum Likelihood Estimation

    NASA Technical Reports Server (NTRS)

    Tsou, Haiping; Yan, Tsun-Yee

    2000-01-01

    This paper describes an extended-image tracking technique based on maximum likelihood estimation. The target image is assumed to have a known profile covering more than one element of a focal plane detector array. It is assumed that the relative position between the imager and the target changes with time and that each pixel of the received target image is disturbed by independent additive white Gaussian noise. When a rotation-invariant movement between imager and target is considered, the maximum-likelihood-based image tracking technique described in this paper is a closed-loop structure capable of providing iterative updates of the movement estimate by calculating the loop feedback signals from a weighted correlation between the currently received target image and the previously estimated reference image in the transform domain. The movement estimate is then used to direct the imager to closely follow the moving target. This image tracking technique has many potential applications, including free-space optical communications and astronomy, where accurate and stabilized optical pointing is essential.
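
    As an illustration of shift estimation by correlation in the transform domain, the sketch below uses plain phase correlation between a reference image and the received image; it omits the paper's maximum-likelihood weighting and closed-loop update, and the arrays are assumed to be same-size 2-D NumPy images.

      import numpy as np

      def estimate_shift(reference, received):
          """Estimate (dy, dx) of `received` relative to `reference` by phase correlation."""
          R = np.fft.fft2(reference)
          T = np.fft.fft2(received)
          cross = R * np.conj(T)
          cross /= np.abs(cross) + 1e-12                   # keep phase information only
          corr = np.abs(np.fft.ifft2(cross))
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          ny, nx = reference.shape                         # map peak to signed shifts
          return (dy - ny if dy > ny // 2 else dy,
                  dx - nx if dx > nx // 2 else dx)

      # toy check with a synthetic "target profile" shifted by (3, -5) pixels
      rng = np.random.default_rng(0)
      ref = rng.random((64, 64))
      rec = np.roll(ref, shift=(3, -5), axis=(0, 1))
      print(estimate_shift(ref, rec))                      # expected: (3, -5)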

  2. Visual Sensing for Urban Flood Monitoring

    PubMed Central

    Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han

    2015-01-01

    With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201

  3. An image processing pipeline to detect and segment nuclei in muscle fiber microscopic images.

    PubMed

    Guo, Yanen; Xu, Xiaoyin; Wang, Yuanyuan; Wang, Yaming; Xia, Shunren; Yang, Zhong

    2014-08-01

    Muscle fiber images play an important role in the medical diagnosis and treatment of many muscular diseases. The number of nuclei in skeletal muscle fiber images is a key biomarker in the diagnosis of muscular dystrophy. In nuclei segmentation, one primary challenge is to correctly separate clustered nuclei. In this article, we developed an image processing pipeline to automatically detect, segment, and analyze nuclei in microscopic images of muscle fibers. The pipeline consists of image pre-processing, identification of isolated nuclei, identification and segmentation of clustered nuclei, and quantitative analysis. Nuclei are initially extracted from the background using a local Otsu threshold. Based on analysis of morphological features of the isolated nuclei, including their areas, compactness, and major axis lengths, a Bayesian network is trained and applied to distinguish isolated nuclei from clustered nuclei and artifacts in all the images. Then a two-step refined watershed algorithm is applied to segment the clustered nuclei. After segmentation, the nuclei can be quantified for statistical analysis. Comparing the segmented results with those of manual analysis and an existing technique, we find that our proposed image processing pipeline achieves good performance with high accuracy and precision. The presented image processing pipeline can therefore help biologists increase their throughput and objectivity in analyzing large numbers of nuclei in muscle fiber images. © 2014 Wiley Periodicals, Inc.
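
    A minimal sketch of the thresholding and clustered-nuclei splitting steps using scikit-image, with a local adaptive threshold standing in for the paper's local Otsu threshold and a distance-transform watershed standing in for its two-step refinement; the filename and parameters are hypothetical, and the Bayesian-network classification step is omitted.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import io, filters, feature, segmentation

      img = io.imread("muscle_fibers.png", as_gray=True)       # hypothetical image file
      thresh = filters.threshold_local(img, block_size=51)     # local adaptive threshold
      binary = img > thresh

      # split touching nuclei with a distance-transform watershed
      distance = ndi.distance_transform_edt(binary)
      peaks = feature.peak_local_max(distance, min_distance=7, labels=binary)
      markers = np.zeros(binary.shape, dtype=int)
      markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
      labels = segmentation.watershed(-distance, markers, mask=binary)
      print("nuclei detected:", labels.max())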

  4. Comprehensive evaluation of an image segmentation technique for measuring tumor volume from CT images

    NASA Astrophysics Data System (ADS)

    Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun

    2008-03-01

    Comprehensive quantitative evaluation of tumor segmentation techniques on large scale clinical data sets is crucial for routine clinical use of CT based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that can provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effects of disease type, lesion size and slice thickness of the image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of the image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large scale evaluation of segmentation techniques for other clinical applications.

  5. Efficient random access high resolution region-of-interest (ROI) image retrieval using backward coding of wavelet trees (BCWT)

    NASA Astrophysics Data System (ADS)

    Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja

    2008-03-01

    Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients; thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.

  6. Frequency Representation: Visualization and Clustering of Acoustic Data Using Self-Organizing Maps.

    PubMed

    Guo, Xinhua; Sun, Song; Yu, Xiantao; Wang, Pan; Nakamura, Kentaro

    2017-11-01

    Extraction and display of frequency information in three-dimensional (3D) acoustic data are important steps to analyze object characteristics, because the characteristics, such as profiles, sizes, surface structures, and material properties, may show frequency dependence. In this study, frequency representation (FR) based on phase information in multispectral acoustic imaging (MSAI) is proposed to overcome the limit of intensity or amplitude information in image display. Experiments are performed on 3D acoustic data collected from a rigid surface engraved with five different letters. The results show that the proposed FR technique can not only identify the depth of the five letters by the colors representing frequency characteristics but also demonstrate the 3D image of the five letters, providing more detailed characteristics that are unavailable by conventional acoustic imaging.

  7. Image quality improvement in cone-beam CT using the super-resolution technique.

    PubMed

    Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi

    2018-04-05

    This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. This technique uses two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. In this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image is then inferred by multiplying the coefficients with the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The RMSE and SSIM between the super-resolution images and the planning CT images were, respectively, as much as 0.81 times and 1.29 times those obtained without the super-resolution technique, i.e., improved on both metrics. In summary, we used the super-resolution technique to improve CBCT image quality.
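
    The two reported metrics can be computed as sketched below with NumPy and scikit-image; the synthetic arrays are stand-ins for an aligned super-resolution slice and planning CT slice.

      import numpy as np
      from skimage.metrics import structural_similarity

      def rmse(a, b):
          return float(np.sqrt(np.mean((a - b) ** 2)))

      # stand-in data: a planning CT slice and a degraded "super-resolution" estimate
      rng = np.random.default_rng(0)
      planning_ct = rng.normal(size=(128, 128))
      sr_slice = planning_ct + 0.1 * rng.normal(size=(128, 128))

      data_range = planning_ct.max() - planning_ct.min()
      print("RMSE :", rmse(sr_slice, planning_ct))
      print("SSIM :", structural_similarity(sr_slice, planning_ct, data_range=data_range))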

  8. Ray Tracing with Virtual Objects.

    ERIC Educational Resources Information Center

    Leinoff, Stuart

    1991-01-01

    Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)

  9. Scanning electron microscopy combined with image processing technique: Analysis of microstructure, texture and tenderness in Semitendinous and Gluteus Medius bovine muscles.

    PubMed

    Pieniazek, Facundo; Messina, Valeria

    2016-11-01

    In this study, the effects of freeze drying on the microstructure, texture, and tenderness of Semitendinous and Gluteus Medius bovine muscles were analyzed by applying Scanning Electron Microscopy combined with image analysis. Samples were analyzed by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were measured with a texture analyzer and by image analysis, and tenderness by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features. A linear correlation was applied between instrumental and image features. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation, and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with image features (energy and homogeneity). Combining Scanning Electron Microscopy with image analysis can be a useful tool to analyze quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.
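
    A minimal sketch of the named GLCM texture features with scikit-image, assuming a 2-D 8-bit SEM image in a hypothetical file "sem.png"; entropy is not offered by graycoprops, so it is computed directly from the normalized GLCM.

      import numpy as np
      from skimage import io
      from skimage.feature import graycomatrix, graycoprops

      img = io.imread("sem.png")                        # 2-D uint8 SEM image assumed
      glcm = graycomatrix(img, distances=[1], angles=[0],
                          levels=256, symmetric=True, normed=True)
      features = {prop: float(graycoprops(glcm, prop)[0, 0])
                  for prop in ("homogeneity", "contrast", "correlation", "energy")}
      features["entropy"] = float(-np.sum(glcm * np.log2(glcm + 1e-12)))
      print(features)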

  10. A Monte Carlo simulation study of an improved K-edge log-subtraction X-ray imaging using a photon counting CdTe detector

    NASA Astrophysics Data System (ADS)

    Lee, Youngjin; Lee, Amy Candy; Kim, Hee-Joung

    2016-09-01

    Recently, significant effort has been devoted to the development of photon counting detectors (PCDs) based on CdTe for applications in X-ray imaging systems. The motivation for developing PCDs is higher image quality. In particular, the K-edge subtraction (KES) imaging technique using a PCD is able to improve image quality and is useful for increasing the contrast resolution of a target material by utilizing a contrast agent. Based on this technique, we present an improved K-edge log-subtraction (KELS) imaging technique. The KELS imaging technique based on PCDs can be realized by using different subtraction energy widths of the energy window. In this study, the effects of the KELS imaging technique and of the subtraction energy width of the energy window were investigated with respect to the contrast, standard deviation, and CNR using a Monte Carlo simulation. We simulated a PCD X-ray imaging system based on CdTe and a polymethylmethacrylate (PMMA) phantom containing various iodine contrast agents. To acquire KELS images, images of the phantom above and below the iodine K-edge absorption energy (33.2 keV) were acquired over different energy ranges. According to the results, the contrast and standard deviation decreased when the subtraction energy width of the energy window was increased. Also, the CNR of the KELS images is higher than that of images acquired using the whole energy range. In particular, the maximum CNR improvements of the KELS images over the whole-energy-range images were factors of 11.33, 8.73, and 8.29 for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. Additionally, the optimum subtraction energy widths of the energy window were found to be 5, 4, and 3 keV for the 1, 2, and 3 mm diameter iodine contrast agents, respectively. In conclusion, we successfully established an improved KELS imaging technique and optimized the subtraction energy width of the energy window, and based on our results, we recommend using this technique for high image quality.
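
    An illustrative sketch of the log-subtraction and CNR computation on synthetic count images, assuming separate energy-window images just below and just above the iodine K-edge plus region-of-interest and background masks; the variable names and values are placeholders, not the simulation geometry of the paper.

      import numpy as np

      def kels_image(counts_below, counts_above):
          # log-subtraction cancels attenuation that is common to both windows
          return np.log(counts_below + 1e-9) - np.log(counts_above + 1e-9)

      def cnr(image, roi_mask, bg_mask):
          bg = image[bg_mask]
          return abs(image[roi_mask].mean() - bg.mean()) / (bg.std() + 1e-12)

      # synthetic example: iodine attenuates more strongly just above its K-edge
      rng = np.random.default_rng(1)
      below = rng.poisson(1000, size=(64, 64)).astype(float)
      above = rng.poisson(800, size=(64, 64)).astype(float)
      above[24:40, 24:40] *= 0.7                        # extra absorption in the iodine ROI
      roi = np.zeros((64, 64), dtype=bool); roi[24:40, 24:40] = True
      bg = np.zeros((64, 64), dtype=bool); bg[:16, :16] = True
      print("CNR =", cnr(kels_image(below, above), roi, bg))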

  11. Whole-brain functional magnetic resonance imaging mapping of acute nociceptive responses induced by formalin in rats using atlas registration-based event-related analysis.

    PubMed

    Shih, Yen-Yu I; Chen, You-Yin; Chen, Chiao-Chi V; Chen, Jyh-Cheng; Chang, Chen; Jaw, Fu-Shan

    2008-06-01

    Nociceptive neuronal activation in subcortical regions has not been well investigated in functional magnetic resonance imaging (fMRI) studies. The present report aimed to use the blood oxygenation level-dependent (BOLD) fMRI technique to map nociceptive responses in both subcortical and cortical regions by employing a refined data processing method, the atlas registration-based event-related (ARBER) analysis technique. During fMRI acquisition, 5% formalin (50 μl) was injected into the left hindpaw to induce nociception. ARBER was then used to normalize the data among rats, and images were analyzed using automatic selection of the atlas-based region of interest. It was found that formalin-induced nociceptive processing increased BOLD signals in both cortical and subcortical regions. The cortical activation was distributed over the cingulate, motor, somatosensory, insular, and visual cortices, and the subcortical activation involved the caudate putamen, hippocampus, periaqueductal gray, superior colliculus, thalamus, and hypothalamus. With the aid of ARBER, the present study revealed a detailed activation pattern that possibly indicates the recruitment of various parts of the nociceptive system. The results also demonstrate the utility of ARBER in establishing an fMRI-based whole-brain nociceptive map. The formalin-induced nociceptive images may serve as a template of central nociceptive responses, which can facilitate the future use of fMRI in the evaluation of new drugs and preclinical therapies for pain. (c) 2008 Wiley-Liss, Inc.

  12. Demodulation techniques for the amplitude modulated laser imager

    NASA Astrophysics Data System (ADS)

    Mullen, Linda; Laux, Alan; Cochenour, Brandon; Zege, Eleonora P.; Katsev, Iosif L.; Prikhach, Alexander S.

    2007-10-01

    A new technique has been found that uses in-phase and quadrature phase (I/Q) demodulation to optimize the images produced with an amplitude-modulated laser imaging system. An I/Q demodulator was used to collect the I/Q components of the received modulation envelope. It was discovered that by adjusting the local oscillator phase and the modulation frequency, the backscatter and target signals can be analyzed separately via the I/Q components. This new approach enhances image contrast beyond what was achieved with a previous design that processed only the composite magnitude information.
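
    A minimal numerical sketch of I/Q demodulation of the detected modulation envelope: mix with in-phase and quadrature local-oscillator references, then low-pass filter. The sampling rate, modulation frequency, and toy signal model are illustrative assumptions, not parameters of the imager described above.

      import numpy as np

      def iq_demodulate(rx, fs, fm, lo_phase=0.0):
          """Mix the detected signal to baseband and low-pass to get I and Q."""
          t = np.arange(rx.size) / fs
          lo_i = np.cos(2 * np.pi * fm * t + lo_phase)
          lo_q = -np.sin(2 * np.pi * fm * t + lo_phase)
          k = int(round(fs / fm))                        # ~one modulation period
          kernel = np.ones(k) / k                        # simple moving-average low-pass
          i = np.convolve(rx * lo_i, kernel, mode="same")
          q = np.convolve(rx * lo_q, kernel, mode="same")
          return i, q                                    # envelope: np.hypot(i, q)

      # toy detector signal: DC level plus a 70 kHz modulation with 0.3 rad phase
      fs, fm = 1.0e6, 70.0e3
      t = np.arange(0, 1e-3, 1 / fs)
      rx = 0.5 + 0.4 * np.cos(2 * np.pi * fm * t + 0.3)
      i, q = iq_demodulate(rx, fs, fm)
      print("recovered phase (rad):", np.arctan2(q, i)[len(t) // 2])   # approximately 0.3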

  13. Simulations of multi-contrast x-ray imaging using near-field speckles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zdora, Marie-Christine; Diamond Light Source, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom and Department of Physics & Astronomy, University College London, London, WC1E 6BT; Thibault, Pierre

    2016-01-28

    X-ray dark-field and phase-contrast imaging using near-field speckles is a novel technique that overcomes limitations inherent in conventional absorption x-ray imaging, i.e. poor contrast for features with similar density. Speckle-based imaging yields a wealth of information with a simple setup tolerant to polychromatic and divergent beams, and simple data acquisition and analysis procedures. Here, we present a simulation software used to model the image formation with the speckle-based technique, and we compare simulated results on a phantom sample with experimental synchrotron data. Thorough simulation of a speckle-based imaging experiment will help for better understanding and optimising the technique itself.

  14. Image-Subtraction Photometry of Variable Stars in the Globular Clusters NGC 6388 and NGC 6441

    NASA Technical Reports Server (NTRS)

    Corwin, Michael T.; Sumerel, Andrew N.; Pritzl, Barton J.; Smith, Horace A.; Catelan, M.; Sweigart, Allen V.; Stetson, Peter B.

    2006-01-01

    We have applied Alard's image subtraction method (ISIS v2.1) to observations of the globular clusters NGC 6388 and NGC 6441 previously analyzed using standard photometric techniques (DAOPHOT, ALLFRAME). In this reanalysis of observations obtained at CTIO, besides recovering the variables previously detected on the basis of our ground-based images, we have also been able to recover most of the RR Lyrae variables previously detected only in the analysis of Hubble Space Telescope WFPC2 observations of the inner region of NGC 6441. In addition, we report five possible new variables not found in the analysis of the HST observations of NGC 6441. This dramatically illustrates the capability of image subtraction techniques applied to ground-based data to recover variables in extremely crowded fields. We have also detected twelve new variables and six possible variables in NGC 6388 not found in our previous ground-based studies. Revised mean periods for RRab stars in NGC 6388 and NGC 6441 are 0.676 day and 0.756 day, respectively. These values are among the largest known for any galactic globular cluster. Additional probable type II Cepheids were identified in NGC 6388, confirming its status as a metal-rich globular cluster rich in Cepheids.

  15. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
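
    For context, in the Stokes settling regime (neglecting the Cunningham slip correction) a measured settling velocity maps to an aerodynamic diameter through the standard relations below; this is background physics rather than a formula quoted from the paper:

      v_s = \frac{\rho_p\, d^2\, g}{18\,\mu},
      \qquad
      d_a = \sqrt{\frac{18\,\mu\, v_s}{\rho_0\, g}},

    where $v_s$ is the settling velocity, $d$ the geometric diameter, $\rho_p$ the particle density, $\rho_0$ the unit density (1000 kg/m^3), $\mu$ the dynamic viscosity of air, and $g$ the gravitational acceleration.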

  16. Combined photoacoustic and magneto-acoustic imaging.

    PubMed

    Qu, Min; Mallidi, Srivalleesha; Mehrmohammadi, Mohammad; Ma, Li Leo; Johnston, Keith P; Sokolov, Konstantin; Emelianov, Stanislav

    2009-01-01

    Ultrasound is a widely used modality with excellent spatial resolution, low cost, portability, reliability, and safety. In clinical practice and in the biomedical field, molecular ultrasound-based imaging techniques are desired to visualize tissue pathologies, such as cancer. In this paper, we present an advanced imaging technique - combined photoacoustic and magneto-acoustic imaging - capable of visualizing the anatomical, functional, and biomechanical properties of tissues or organs. The experiments to test the combined imaging technique were performed using dual, nanoparticle-based contrast agents that exhibit the desired optical and magnetic properties. The results of our study demonstrate the feasibility of combined photoacoustic and magneto-acoustic imaging, which takes advantage of each imaging technique and provides high sensitivity, reliable contrast, and good penetration depth. Therefore, the developed imaging technique can be used in a wide range of biomedical and clinical applications.

  17. A Synthesis of Star Calibration Techniques for Ground-Based Narrowband Electron-Multiplying Charge-Coupled Device Imagers Used in Auroral Photometry

    NASA Technical Reports Server (NTRS)

    Grubbs, Guy II; Michell, Robert; Samara, Marilia; Hampton, Don; Jahn, Jorg-Micha

    2016-01-01

    A technique is presented for the periodic and systematic calibration of ground-based optical imagers. It is important to have a common system of units (Rayleighs or photon flux) for cross comparison as well as self-comparison over time. With the advancement in technology, the sensitivity of these imagers has improved so that stars can be used for more precise calibration. Background subtraction, flat fielding, star mapping, and other common techniques are combined in deriving a calibration technique appropriate for a variety of ground-based imager installations. Spectral (4278, 5577, and 8446 Å) ground-based imager data with multiple fields of view (19, 47, and 180 deg) are processed and calibrated using the techniques developed. The calibration techniques applied result in intensity measurements in agreement between different imagers using identical spectral filtering, and the intensity at each wavelength observed is within the expected range of auroral measurements. The application of these star calibration techniques, which convert raw imager counts into units of photon flux, makes it possible to do quantitative photometry. The computed photon fluxes, in units of Rayleighs, can be used for the absolute photometry between instruments or as input parameters for auroral electron transport models.

  18. An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model

    PubMed Central

    Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq

    2018-01-01

    For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques. PMID:29694429

  19. An effective content-based image retrieval technique for image visuals representation based on the bag-of-visual-words model.

    PubMed

    Jabeen, Safia; Mehmood, Zahid; Mahmood, Toqeer; Saba, Tanzila; Rehman, Amjad; Mahmood, Muhammad Tariq

    2018-01-01

    For the last three decades, content-based image retrieval (CBIR) has been an active research area, representing a viable solution for retrieving similar images from an image repository. In this article, we propose a novel CBIR technique based on the visual words fusion of speeded-up robust features (SURF) and fast retina keypoint (FREAK) feature descriptors. SURF is a sparse descriptor whereas FREAK is a dense descriptor. Moreover, SURF is a scale and rotation-invariant descriptor that performs better in the case of repeatability, distinctiveness, and robustness. It is robust to noise, detection errors, geometric, and photometric deformations. It also performs better at low illumination within an image as compared to the FREAK descriptor. In contrast, FREAK is a retina-inspired speedy descriptor that performs better for classification-based problems as compared to the SURF descriptor. Experimental results show that the proposed technique based on the visual words fusion of SURF-FREAK descriptors combines the features of both descriptors and resolves the aforementioned issues. The qualitative and quantitative analysis performed on three image collections, namely Corel-1000, Corel-1500, and Caltech-256, shows that proposed technique based on visual words fusion significantly improved the performance of the CBIR as compared to the feature fusion of both descriptors and state-of-the-art image retrieval techniques.
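
    A minimal bag-of-visual-words sketch with OpenCV and scikit-learn is given below; ORB is used as a stand-in local descriptor because SURF is non-free in stock OpenCV builds, the image paths are hypothetical, and the SURF-FREAK visual-word fusion of the paper is not reproduced.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def descriptors(path):
          img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
          _, des = cv2.ORB_create(nfeatures=500).detectAndCompute(img, None)
          return des.astype(np.float32)

      repository = ["img1.jpg", "img2.jpg", "img3.jpg"]   # hypothetical image repository
      codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(
          np.vstack([descriptors(p) for p in repository]))

      def bovw_histogram(path):
          words = codebook.predict(descriptors(path))
          hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
          return hist / hist.sum()                        # normalized visual-word histogram

      # retrieval: rank repository images by histogram distance to the query
      query = bovw_histogram("query.jpg")
      ranked = sorted(repository,
                      key=lambda p: np.linalg.norm(bovw_histogram(p) - query))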

  20. Mechanical analysis and force chain determination in granular materials using digital image correlation.

    PubMed

    Chen, Fanxiu; Zhuang, Qi; Zhang, Huixin

    2016-06-20

    The mechanical behaviors of granular materials are governed by the grain properties and microstructure of the materials. We conducted experiments to study the force transmission in granular materials using plane strain tests. The large amount of nearly continuous displacement data provided by the advanced noncontact experimental technique of digital image correlation (DIC) has provided a means to quantify local displacements and strains at the particle level. The average strain of each particle could be calculated based on the DIC method, and the average stress could be obtained using Hooke's law. The relationship between the stress and particle force could be obtained based on basic Newtonian mechanics and the balance of linear momentum at the particle level. This methodology is introduced and validated. In the testing procedure, the system is tested in real 2D particle cases, and the contact forces and force chain are obtained and analyzed. The system has great potential for analyzing a real granular system and measuring the contact forces and force chain.

  1. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    PubMed

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier which can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
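
    A minimal sketch of the blob-detection stage using a Laplacian-of-Gaussian detector from scikit-image on the inverted green channel of a fundus image; the filename and thresholds are hypothetical, and the scale-adapted descriptors and semi-supervised classifier of the paper are not reproduced.

      from skimage import io
      from skimage.util import img_as_float
      from skimage.feature import blob_log

      fundus = io.imread("fundus.png")                    # hypothetical RGB fundus image
      green = img_as_float(fundus[..., 1])                # MAs show best in the green channel
      candidates = blob_log(1.0 - green,                  # invert: MAs appear as dark blobs
                            min_sigma=1, max_sigma=5, num_sigma=5, threshold=0.1)
      print("candidate blobs (row, col, sigma):")
      print(candidates[:5])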

  2. Research on Wide-field Imaging Technologies for Low-frequency Radio Array

    NASA Astrophysics Data System (ADS)

    Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.

    2017-09-01

    Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. One particularly pernicious problem is the non-coplanar baseline effect: it leads to distortion of the final image when the phase term in the w direction, the so-called w-term, is ignored. The image degradation effects are amplified for telescopes with a wide field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles. Their advantages and disadvantages are analyzed by comparing their computational cost and computational complexity. We conduct simulations with two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array. The resulting images are also compared with the two-dimensional Fourier transform method. The results show that the image quality and correctness obtained from both faceting and w-projection are better than those of the two-dimensional Fourier transform method in wide-field imaging. The image quality and run time as affected by the number of facets and w steps are also evaluated; the results indicate that the number of facets and w steps must be chosen reasonably. Finally, we analyze the effect of data size on the run time of faceting and w-projection. The results show that faceting and w-projection need to be optimized before processing massive amounts of data. The research in the present paper initiates the analysis of wide-field imaging techniques and their application in existing and future low-frequency arrays, and fosters their application in much broader fields.
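
    For reference, the non-coplanar baseline term enters the standard interferometric measurement equation as the extra $w$-dependent phase below (the textbook form, not an equation reproduced from the paper):

      V(u, v, w) = \iint \frac{I(l, m)}{\sqrt{1 - l^2 - m^2}}\,
      \exp\!\left\{-2\pi i\left[u l + v m + w\left(\sqrt{1 - l^2 - m^2} - 1\right)\right]\right\}\, \mathrm{d}l\, \mathrm{d}m .

    When the $w\left(\sqrt{1 - l^2 - m^2} - 1\right)$ term is negligible, the relation reduces to a two-dimensional Fourier transform; faceting restores this approximation on small sky patches, while w-projection convolves the visibilities with a $w$-dependent kernel before gridding.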

  3. Ambient Mass Spectrometry Imaging Using Direct Liquid Extraction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laskin, Julia; Lanekoff, Ingela

    2015-11-13

    Mass spectrometry imaging (MSI) is a powerful analytical technique that enables label-free spatial localization and identification of molecules in complex samples.1-4 MSI applications range from forensics5 to clinical research6 and from understanding microbial communication7-8 to imaging biomolecules in tissues.1, 9-10 Recently, MSI protocols have been reviewed.11 Ambient ionization techniques enable direct analysis of complex samples under atmospheric pressure without special sample pretreatment.3, 12-16 In fact, in ambient ionization mass spectrometry, sample processing (e.g., extraction, dilution, preconcentration, or desorption) occurs during the analysis.17 This substantially speeds up analysis and eliminates any possible effects of sample preparation on the localization of molecules in the sample.3, 8, 12-14, 18-20 Venter and co-workers have classified ambient ionization techniques into three major categories based on the sample processing steps involved: 1) liquid extraction techniques, in which analyte molecules are removed from the sample and extracted into a solvent prior to ionization; 2) desorption techniques capable of generating free ions directly from substrates; and 3) desorption techniques that produce larger particles subsequently captured by an electrospray plume and ionized.17 This review focuses on localized analysis and ambient imaging of complex samples using a subset of ambient ionization methods broadly defined as “liquid extraction techniques” based on the classification introduced by Venter and co-workers.17 Specifically, we include techniques where analyte molecules are desorbed from solid or liquid samples using charged droplet bombardment, liquid extraction, physisorption, chemisorption, mechanical force, laser ablation, or laser capture microdissection. Analyte extraction is followed by soft ionization that generates ions corresponding to intact species. Some of the key advantages of liquid extraction techniques include the ease of operation, ability to analyze samples in their native environments, speed of analysis, and ability to tune the extraction solvent composition to the problem at hand. For example, solvent composition may be optimized for efficient extraction of different classes of analytes from the sample or for quantification or online derivatization through reactive analysis. In this review, we will: 1) introduce individual liquid extraction techniques capable of localized analysis and imaging, 2) describe approaches for quantitative MSI experiments free of matrix effects, 3) discuss advantages of reactive analysis for MSI experiments, and 4) highlight selected applications (published between 2012 and 2015) that focus on imaging and spatial profiling of molecules in complex biological and environmental samples.

  4. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  5. Image inversion analysis of the HST OTA (Hubble Space Telescope Optical Telescope Assembly), phase A

    NASA Technical Reports Server (NTRS)

    Litvak, M. M.

    1991-01-01

    Technical work during September-December 1990 consisted of: (1) analyzing HST point source images obtained from JPL; (2) retrieving phase information from the images by a direct (noniterative) technique; and (3) characterizing, in a preliminary manner, the wavefront aberration due to the errors in the Hubble Space Telescope (HST) mirrors. This work was in support of the JPL design of compensating optics for the next generation wide-field planetary camera on HST. This digital technique for phase retrieval from pairs of defocused images is based on the energy transport equation between the image planes. In addition, an end-to-end wave optics routine, based on the JPL Code 5 prescription of the unaberrated HST and WFPC, was derived to output the reference phase front when mirror error is absent. The Roddier routine unwrapped the retrieved phase by inserting the required jumps of ±2π radians for the sake of smoothness. A least-squares fitting routine, insensitive to phase unwrapping but nonlinear, was used to obtain estimates of the Zernike polynomial coefficients that describe the aberration. The phase results were close to, but higher than, the expected error in the conic constant of the primary mirror suggested by the fossil evidence. Aberration contributed by the camera itself could be responsible for the small discrepancy, but this was not verified by analysis.
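
    The energy transport equation underlying this direct phase-retrieval approach is commonly written as the transport-of-intensity equation below (standard form, up to sign convention; shown for context rather than quoted from the report):

      -k\,\frac{\partial I(x, y; z)}{\partial z}
      = \nabla_{\!\perp} \cdot \left[ I(x, y; z)\, \nabla_{\!\perp}\varphi(x, y; z) \right],

    where $k$ is the wavenumber, $I$ the measured intensity, and $\varphi$ the phase to be retrieved; the axial derivative $\partial I / \partial z$ is estimated by finite differences between the pair of defocused images.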

  6. Inhomogeneity Based Characterization of Distribution Patterns on the Plasma Membrane

    PubMed Central

    Paparelli, Laura; Corthout, Nikky; Wakefield, Devin L.; Sannerud, Ragna; Jovanovic-Talisman, Tijana; Annaert, Wim; Munck, Sebastian

    2016-01-01

    Cell surface protein and lipid molecules are organized in various patterns: randomly, along gradients, or clustered when segregated into discrete micro- and nano-domains. Their distribution is tightly coupled to events such as polarization, endocytosis, and intracellular signaling, but challenging to quantify using traditional techniques. Here we present a novel approach to quantify the distribution of plasma membrane proteins and lipids. This approach describes spatial patterns in degrees of inhomogeneity and incorporates an intensity-based correction to analyze images with a wide range of resolutions; we have termed it Quantitative Analysis of the Spatial distributions in Images using Mosaic segmentation and Dual parameter Optimization in Histograms (QuASIMoDOH). We tested its applicability using simulated microscopy images and images acquired by widefield microscopy, total internal reflection microscopy, structured illumination microscopy, and photoactivated localization microscopy. We validated QuASIMoDOH, successfully quantifying the distribution of protein and lipid molecules detected with several labeling techniques, in different cell model systems. We also used this method to characterize the reorganization of cell surface lipids in response to disrupted endosomal trafficking and to detect dynamic changes in the global and local organization of epidermal growth factor receptors across the cell surface. Our findings demonstrate that QuASIMoDOH can be used to assess protein and lipid patterns, quantifying distribution changes and spatial reorganization at the cell surface. An ImageJ/Fiji plugin of this analysis tool is provided. PMID:27603951

  7. Grid artifact reduction for direct digital radiography detectors based on rotated stationary grids with homomorphic filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong Sik; Lee, Sanggyun

    2013-06-15

    Purpose: Grid artifacts are caused when using the antiscatter grid in obtaining digital x-ray images. In this paper, research on grid artifact reduction techniques is conducted especially for the direct detectors, which are based on amorphous selenium. Methods: In order to analyze and reduce the grid artifacts, the authors consider a multiplicative grid image model and propose a homomorphic filtering technique. For minimal damage due to filters, which are used to suppress the grid artifacts, rotated grids with respect to the sampling direction are employed, and min-max optimization problems for searching optimal grid frequencies and angles for given sampling frequencies are established. The authors then propose algorithms for the grid artifact reduction based on the band-stop filters as well as low-pass filters. Results: The proposed algorithms are experimentally tested for digital x-ray images, which are obtained from direct detectors with the rotated grids, and are compared with other algorithms. It is shown that the proposed algorithms can successfully reduce the grid artifacts for direct detectors. Conclusions: By employing the homomorphic filtering technique, the authors can considerably suppress the strong grid artifacts with relatively narrow-bandwidth filters compared to the normal filtering case. Using rotated grids also significantly reduces the ringing artifact. Furthermore, for specific grid frequencies and angles, the authors can use simple homomorphic low-pass filters in the spatial domain, and thus alleviate the grid artifacts with very low implementation complexity.
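
    A minimal sketch of the multiplicative-model idea: take the logarithm so the grid pattern becomes additive, suppress it with a Gaussian band-stop (notch) around an assumed grid frequency in the Fourier domain, and exponentiate back. The grid-frequency/angle search and the optimized filters of the paper are not reproduced, and the toy image is a placeholder.

      import numpy as np

      def homomorphic_grid_removal(img, grid_freq, bandwidth=0.01):
          """Log-transform, notch out `grid_freq` (cycles/pixel) in the Fourier domain, invert."""
          logi = np.log(img.astype(float) + 1.0)          # multiplicative model -> additive
          F = np.fft.fftshift(np.fft.fft2(logi))
          fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]
          fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
          radius = np.hypot(fx, fy)
          notch = 1.0 - np.exp(-((radius - grid_freq) ** 2) / (2.0 * bandwidth ** 2))
          filtered = np.fft.ifft2(np.fft.ifftshift(F * notch)).real
          return np.exp(filtered) - 1.0

      # toy example: uniform exposure multiplied by a 0.2 cycle/pixel grid pattern
      y = np.arange(256)[:, None]
      img = 1000.0 * (1.0 + 0.1 * np.cos(2 * np.pi * 0.2 * y)) * np.ones((256, 256))
      clean = homomorphic_grid_removal(img, grid_freq=0.2)
      print("std before/after:", img.std(), clean.std())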

  8. Estimation of Characteristics of Echo Envelope Using RF Echo Signal from the Liver

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Ikeda, Kazuki; Moriyasu, Norifumi

    2001-05-01

    To realize quantitative diagnosis of liver cirrhosis, we have been analyzing the probability density function (PDF) of echo amplitude using B-mode images. However, the B-mode image is affected by the various signal and image processing techniques used in the diagnosis equipment, so a detailed and quantitative analysis is very difficult. In this paper, we analyze the PDF of echo amplitude using RF echo signal and B-mode images of normal and cirrhotic livers, and compare both results to examine the validity of the RF echo signal.

  9. Development of image analysis software for quantification of viable cells in microchips.

    PubMed

    Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland

    2018-01-01

    Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data that is generated requires automated methods for processing and analyzing all the resulting information. The software available so far is suitable for processing fluorescence and phase contrast images, but often does not provide good results from transmission light microscopy images, due to the intrinsic variation of the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between image acquisitions introduced by operators/equipment. In this contribution, we present an image processing software package, Python based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells with fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.

  10. A mobile unit for memory retrieval in daily life based on image and sensor processing

    NASA Astrophysics Data System (ADS)

    Takesumi, Ryuji; Ueda, Yasuhiro; Nakanishi, Hidenobu; Nakamura, Atsuyoshi; Kakimori, Nobuaki

    2003-10-01

    We developed a Mobile Unit whose purpose is to support memory retrieval in daily life. In this paper, we describe two characteristic features of this unit: (1) behavior classification with an acceleration sensor, and (2) extraction of environmental differences with image processing technology. In (1), by analyzing the power and frequency of an acceleration sensor oriented along the gravity direction, the user's activities can be classified into walking, staying, and so on. In (2), by extracting the difference between the beginning and ending scenes of a stay with image processing, the user's action is recognized as a change in the environment. Using these two techniques, specific scenes of daily life can be extracted, and important information at scene changes can be recorded. In particular, we describe the effectiveness of this approach in supporting the retrieval of important things, such as an item left behind or the state of unfinished work.

  11. Imaging System Model Crammed Into A 32K Microcomputer

    NASA Astrophysics Data System (ADS)

    Tyson, Robert K.

    1986-12-01

    An imaging system model, based upon linear systems theory, has been developed for a microcomputer with less than 32K of free random access memory (RAM). The model includes diffraction effects of the optics, aberrations in the optics, and atmospheric propagation transfer functions. Variables include pupil geometry, magnitude and character of the aberrations, and strength of atmospheric turbulence ("seeing"). Both coherent and incoherent image formation can be evaluated. The techniques employed for crowding the model into a very small computer will be discussed in detail. Simplifying assumptions for the diffraction and aberration phenomena will be shown along with practical considerations in modeling the optical system. Particular emphasis is placed on avoiding inaccuracies in modeling the pupil and the associated optical transfer function knowing limits on spatial frequency content and resolution. Memory and runtime constraints are analyzed stressing the efficient use of assembly language Fourier transform routines, disk input/output, and graphic displays. The compromises between computer time, limited RAM, and scientific accuracy will be given with techniques for balancing these parameters for individual needs.

  12. Automatic analysis for neuron by confocal laser scanning microscope

    NASA Astrophysics Data System (ADS)

    Satou, Kouhei; Aoki, Yoshimitsu; Mataga, Nobuko; Hensh, Takao K.; Taki, Katuhiko

    2005-12-01

    The aim of this study is to develop a system that recognizes both the macro- and microscopic configurations of nerve cells and automatically performs the necessary 3-D measurements and functional classification of spines. The acquisition of 3-D images of cranial nerves has been enabled by the use of a confocal laser scanning microscope, although the highly accurate 3-D measurements of the microscopic structures of cranial nerves and their classification based on their configurations have not yet been accomplished. In this study, in order to obtain highly accurate measurements of the microscopic structures of cranial nerves, existing positions of spines were predicted by the 2-D image processing of tomographic images. Next, based on the positions that were predicted on the 2-D images, the positions and configurations of the spines were determined more accurately by 3-D image processing of the volume data. We report the successful construction of an automatic analysis system that uses a coarse-to-fine technique to analyze the microscopic structures of cranial nerves with high speed and accuracy by combining 2-D and 3-D image analyses.

  13. Imaging mass spectrometry data reduction: automated feature identification and extraction.

    PubMed

    McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M

    2010-12-01

    Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
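
    The data-reduction idea, replacing each pixel's full spectrum by intensities at a small set of automatically identified features, can be illustrated with a simple peak-picking scheme. The sketch below is not the authors' software; the synthetic data and peak-picking thresholds are assumptions:

```python
# Illustrative sketch: pick peaks on the dataset's mean spectrum and keep only
# those channels, turning each pixel's spectrum into a short feature vector.
import numpy as np
from scipy.signal import find_peaks

# data: (n_pixels, n_channels) imaging MS cube flattened over x/y; random
# numbers stand in for real intensities here.
rng = np.random.default_rng(0)
data = rng.random((500, 5000))

mean_spectrum = data.mean(axis=0)
# Peak-picking thresholds are assumptions; tune prominence/distance per dataset.
peaks, _ = find_peaks(mean_spectrum, prominence=0.002, distance=5)

reduced = data[:, peaks]                  # keep only the peak channels
print(f"variables: {data.shape[1]} -> {reduced.shape[1]} "
      f"({data.shape[1] / max(reduced.shape[1], 1):.0f}x reduction)")
```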

  14. Activity Detection and Retrieval for Image and Video Data with Limited Training

    DTIC Science & Technology

    2015-06-10

    ... applications. Here we propose two techniques for image segmentation. The first involves an automata-based multiple threshold selection scheme, where a mixture of Gaussians is fitted to the ... For our second approach to segmentation, we employ a region-based segmentation technique that is capable of handling intensity inhomogeneity ...

  15. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. High-standard hardware and software are demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first encoded in the cloud server to conduct automatic diagnosis in real time, using an image processing technique developed with the ITK library and a web service. Users upload images through a website, and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  16. Preliminary Results on the Surface of a New Fe-Based Metallic Material after “In Vivo” Maintaining

    NASA Astrophysics Data System (ADS)

    Săndulache, F.; Stanciu, S.; Cimpoeşu, N.; Stanciu, T.; Cimpoeșu, R.; Enache, A.; Baciu, R.

    2017-06-01

    A new Fe-based alloy was obtained using UltraCast melting equipment. After mechanical processing, the alloy was implanted in five rabbit specimens following the "in-bone" procedure. After 30 days of implantation the samples were recovered and analyzed in terms of weight and surface state. Scanning electron microscopy was used to determine the morphology of the new compounds on the metallic surface, and energy-dispersive X-ray spectroscopy was used for chemical analysis. A bond between the metallic material and the biological material of the bone was observed through the increase in sample weight and in the SEM images. After the first set of tests, once the samples had been extracted and biologically cleaned, they were ultrasonically cleaned and re-analyzed in order to establish the stability of the chemical compounds.

  17. Modeling ECM fiber formation: structure information extracted by analysis of 2D and 3D image sets

    NASA Astrophysics Data System (ADS)

    Wu, Jun; Voytik-Harbin, Sherry L.; Filmer, David L.; Hoffman, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennis; Robinson, Joseph P.

    2002-05-01

    Recent evidence supports the notion that biological functions of extracellular matrix (ECM) are highly correlated to its structure. Understanding this fibrous structure is very crucial in tissue engineering to develop the next generation of biomaterials for restoration of tissues and organs. In this paper, we integrate confocal microscopy imaging and image-processing techniques to analyze the structural properties of ECM. We describe a 2D fiber middle-line tracing algorithm and apply it via Euclidean distance maps (EDM) to extract accurate fibrous structure information, such as fiber diameter, length, orientation, and density, from single slices. Based on a 2D tracing algorithm, we extend our analysis to 3D tracing via Euclidean distance maps to extract 3D fibrous structure information. We use computer simulation to construct the 3D fibrous structure which is subsequently used to test our tracing algorithms. After further image processing, these models are then applied to a variety of ECM constructions from which results of 2D and 3D traces are statistically analyzed.
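
    The 2D step of such an analysis, recovering fiber middle lines and local diameters from a binary fiber mask via a Euclidean distance map, can be sketched as follows. This is an illustration of the general EDM/skeleton idea, not the authors' tracing algorithm; the synthetic fiber and its dimensions are assumptions:

```python
# Sketch: skeletonize a binary fiber mask to get middle lines, then read local
# fiber radii off the Euclidean distance map (EDM) at skeleton pixels.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

# Synthetic binary image with one horizontal fiber, 7 pixels thick.
mask = np.zeros((128, 128), dtype=bool)
mask[60:67, 10:118] = True

edm = distance_transform_edt(mask)      # Euclidean distance to the background
centerline = skeletonize(mask)          # 1-px-wide fiber middle line

radii = edm[centerline]
diameter = 2 * np.median(radii) - 1     # EDT counts to the nearest background pixel
print(f"estimated fiber diameter: {diameter:.1f} px "
      f"over {int(centerline.sum())} centerline pixels")
```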

  18. (abstract) Topographic Signatures in Geology

    NASA Technical Reports Server (NTRS)

    Farr, Tom G.; Evans, Diane L.

    1996-01-01

    Topographic information is required for many Earth Science investigations. For example, topography is an important element in regional and global geomorphic studies because it reflects the interplay between the climate-driven processes of erosion and the tectonic processes of uplift. A number of techniques have been developed to analyze digital topographic data, including Fourier texture analysis. A Fourier transform of the topography of an area allows the spatial frequency content of the topography to be analyzed. Band-pass filtering of the transform produces images representing the amplitude of different spatial wavelengths. These are then used in a multi-band classification to map units based on their spatial frequency content. The results using a radar image instead of digital topography showed good correspondence to a geologic map; however, brightness variations in the image unrelated to topography caused errors. An additional benefit of using Fourier band-pass images for the classification is that the textural signatures of the units are quantitative measures of the spatial characteristics of the units that may be used to map similar units in similar environments.
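
    The band-pass step described above can be prototyped directly on a gridded DEM: transform, mask an annulus of spatial frequencies, and inverse-transform to get one amplitude layer per wavelength band. The sketch below is illustrative only; the band edges and the random stand-in DEM are assumptions:

```python
# Band-pass the 2-D FFT of a DEM into annular spatial-frequency bands and use
# the amplitude of each band as a texture layer for classification.
import numpy as np

def bandpass_layers(dem, band_edges=(0.01, 0.05, 0.15, 0.5)):
    """Return one amplitude image per spatial-frequency band (cycles/pixel)."""
    F = np.fft.fftshift(np.fft.fft2(dem))
    fy = np.fft.fftshift(np.fft.fftfreq(dem.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(dem.shape[1]))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))

    layers = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (radius >= lo) & (radius < hi)
        band = np.fft.ifft2(np.fft.ifftshift(F * mask))
        layers.append(np.abs(band))       # amplitude of this wavelength band
    return np.stack(layers)

dem = np.random.default_rng(1).random((256, 256))   # stand-in for real topography
print(bandpass_layers(dem).shape)                    # (3, 256, 256)
```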

  19. SU-F-P-48: The Quantitative Evaluation and Comparison of Image Distortion and Loss of X-Ray Images Between Anti-Scattered Grid and Moire Compensation Processing in Digital Radiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, W; Jung, J; Kang, Y

    Purpose: To quantitatively analyze the influence that image processing for Moire elimination has in digital radiography, by comparing images acquired with an optimized anti-scatter grid only against images acquired with software processing paired with a misaligned low-frequency grid. Methods: A special phantom, which does not create scattered radiation, was used to acquire non-grid reference images without any grid. One set of images was acquired with an optimized grid aligned to the detector pixels, and another set was acquired with a misaligned low-frequency grid paired with a Moire elimination processing algorithm. The X-ray technique used was based on consideration of the Bucky factor derived from the non-grid reference images. For evaluation, we compared the pixel intensity of the images acquired with grids to that of the reference images. Results: Compared to images acquired with the optimized grid, images acquired with the Moire elimination processing algorithm showed a 10 to 50% lower mean contrast value in the ROI. Severe image distortion was found when the object's thickness was measured at 7 or fewer pixels; in this case, the contrast value measured from images acquired with the Moire elimination processing algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential risk of Moire-compensated images in diagnosis. Images acquired with a misaligned low-frequency grid exhibit Moire noise, and the Moire compensation processing algorithm used to remove this noise actually caused image distortion. As a result, fractures and/or calcifications that appear in only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but relying entirely on image processing, and the potential risks this carries.

  20. Microtubules in Plant Cells: Strategies and Methods for Immunofluorescence, Transmission Electron Microscopy and Live Cell Imaging

    PubMed Central

    Celler, Katherine; Fujita, Miki; Kawamura, Eiko; Ambrose, Chris; Herburger, Klaus; Wasteneys, Geoffrey O.

    2016-01-01

    Microtubules are required throughout plant development for a wide variety of processes, and different strategies have evolved to visualize and analyze them. This chapter provides specific methods that can be used to analyze microtubule organization and dynamic properties in plant systems and summarizes the advantages and limitations for each technique. We outline basic methods for preparing samples for immunofluorescence labelling, including an enzyme-based permeabilization method, and a freeze-shattering method, which generates microfractures in the cell wall to provide antibodies access to cells in cuticle-laden aerial organs such as leaves. We discuss current options for live cell imaging of MTs with fluorescently tagged proteins (FPs), and provide chemical fixation, high pressure freezing/freeze substitution, and post-fixation staining protocols for preserving MTs for transmission electron microscopy and tomography. PMID:26498784

  1. White Light Used to Enable Enhanced Surface Topography, Geometry, and Wear Characterization of Oil-Free Bearings

    NASA Technical Reports Server (NTRS)

    Lucero, John M.

    2003-01-01

    A new optically based measuring capability that characterizes surface topography, geometry, and wear has been employed by NASA Glenn Research Center's Tribology and Surface Science Branch. To characterize complex parts in more detail, we are using a three-dimensional surface structure analyzer, the NewView5000 manufactured by Zygo Corporation (Middlefield, CT). This system provides graphical images and high-resolution numerical analyses to accurately characterize surfaces. Because of the inherent complexity of the various analyzed assemblies, the machine has been pushed to its limits. For example, special hardware fixtures and measuring techniques were developed to characterize Oil-Free thrust bearings specifically. We performed a more detailed wear analysis using scanning white light interferometry to image and measure the bearing structure and topography, enabling a further understanding of bearing failure causes.

  2. Development of an acquisition protocol and a segmentation algorithm for wounds of cutaneous Leishmaniasis in digital images

    NASA Astrophysics Data System (ADS)

    Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro

    2010-03-01

    We developed a protocol for the acquisition of digital images and an algorithm for a color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working environment to manipulate brightness, lighting and undesirable shadows on the injury using indirect lighting. Also, this protocol was used to accurately calculate the area of the lesion expressed in mm², even on curved surfaces, by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) Location of the wound determined by applying threshold and mathematical morphology techniques to the H layer of the HSV color space, (2) Determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space based on masks (for the wound and the background) estimated from the first stage, and (3) Refinement of the calculations obtained in the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
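
    Stage (1) of the pipeline, locating the wound by thresholding the H layer of HSV and cleaning up with mathematical morphology, can be sketched as below. The file name, threshold polarity and structuring-element sizes are illustrative assumptions, not the published parameters:

```python
# Minimal sketch of the wound-location stage: threshold the H layer of the HSV
# image and clean the mask with mathematical morphology. Assumes an RGB image.
import numpy as np
from skimage import io, color, filters, morphology

rgb = io.imread("lesion.jpg")                      # illustrative file name
hue = color.rgb2hsv(rgb)[..., 0]                   # H layer of HSV

thresh = filters.threshold_otsu(hue)
wound = hue > thresh                               # assumed polarity; may need flipping

# Morphological cleanup: remove specks, close small gaps inside the lesion.
wound = morphology.binary_opening(wound, morphology.disk(3))
wound = morphology.binary_closing(wound, morphology.disk(5))
wound = morphology.remove_small_objects(wound, min_size=200)

print(f"candidate wound pixels: {int(wound.sum())}")
```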

  3. MARS spectral molecular imaging of lamb tissue: data collection and image analysis

    NASA Astrophysics Data System (ADS)

    Aamir, R.; Chernoglazov, A.; Bateman, C. J.; Butler, A. P. H.; Butler, P. H.; Anderson, N. G.; Bell, S. T.; Panta, R. K.; Healy, J. L.; Mohr, J. L.; Rajendran, K.; Walsh, M. F.; de Ruiter, N.; Gieseg, S. P.; Woodfield, T.; Renaud, P. F.; Brooke, L.; Abdul-Majid, S.; Clyne, M.; Glendenning, R.; Bones, P. J.; Billinghurst, M.; Bartneck, C.; Mandalika, H.; Grasset, R.; Schleich, N.; Scott, N.; Nik, S. J.; Opie, A.; Janmale, T.; Tang, D. N.; Kim, D.; Doesburg, R. M.; Zainon, R.; Ronaldson, J. P.; Cook, N. J.; Smithies, D. J.; Hodge, K.

    2014-02-01

    Spectral molecular imaging is a new imaging technique able to discriminate and quantify different components of tissue simultaneously at high spatial and high energy resolution. Our MARS scanner is an x-ray based small animal CT system designed to be used in the diagnostic energy range (20-140 keV). In this paper, we demonstrate the use of the MARS scanner, equipped with the Medipix3RX spectroscopic photon-processing detector, to discriminate fat, calcium, and water in tissue. We present data collected from a sample of lamb meat including bone as an illustrative example of human tissue imaging. The data is analyzed using our 3D Algebraic Reconstruction Algorithm (MARS-ART) and by material decomposition based on a constrained linear least squares algorithm. The results presented here clearly show the quantification of lipid-like, water-like and bone-like components of tissue. However, it is also clear to us that better algorithms could extract more information of clinical interest from our data. Because we are one of the first to present data from multi-energy photon-processing small animal CT systems, we make the raw, partial and fully processed data available with the intention that others can analyze it using their familiar routines. The raw, partially processed and fully processed data of lamb tissue along with the phantom calibration data can be found at http://hdl.handle.net/10092/8531.
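
    The material-decomposition step, expressing each voxel's multi-energy measurement as a constrained mix of basis materials, can be illustrated with a non-negative least squares solve. The basis matrix and noise level below are synthetic stand-ins, not MARS calibration data:

```python
# Illustrative constrained (non-negative) least-squares material decomposition:
# each voxel's multi-energy attenuation is modeled as a non-negative mix of
# basis materials.
import numpy as np
from scipy.optimize import nnls

n_bins, n_materials = 5, 3
rng = np.random.default_rng(2)
# Columns: per-energy-bin attenuation of lipid-like, water-like, bone-like bases
# (random stand-ins; real bases come from calibration phantoms).
basis = np.abs(rng.normal(size=(n_bins, n_materials)))

true_fractions = np.array([0.2, 0.7, 0.1])
measured = basis @ true_fractions + 0.01 * rng.normal(size=n_bins)

fractions, residual = nnls(basis, measured)       # enforce fractions >= 0
print("recovered fractions:", np.round(fractions, 3), "residual:", round(residual, 4))
```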

  4. Skin Parameter Map Retrieval from a Dedicated Multispectral Imaging System Applied to Dermatology/Cosmetology

    PubMed Central

    2013-01-01

    In vivo quantitative assessment of skin lesions is an important step in the evaluation of skin condition. An objective measurement device can help as a valuable tool for skin analysis. We propose an explorative new multispectral camera specifically developed for dermatology/cosmetology applications. The multispectral imaging system provides images of skin reflectance at different wavebands covering the visible and near-infrared domain. It is coupled with a neural network-based algorithm for the reconstruction of the reflectance cube of cutaneous data. This cube contains only the skin optical reflectance spectrum in each pixel of the bidimensional spatial information. The reflectance cube is analyzed by an algorithm based on a Kubelka-Munk model combined with an evolutionary algorithm. The technique allows quantitative measurement of cutaneous tissue and retrieves five skin parameter maps: melanin concentration, epidermis/dermis thickness, haemoglobin concentration, and the oxygenated hemoglobin. The results retrieved on healthy participants by the algorithm are in good accordance with data from the literature. The usefulness of the developed technique was proved during two experiments: a clinical study based on vitiligo and melasma skin lesions, and a skin oxygenation experiment (induced ischemia) with a healthy participant, where normal tissue is recorded at rest and when temporary ischemia is induced.

  5. Spatial and temporal single-cell volume estimation by a fluorescence imaging technique with application to astrocytes in primary culture

    NASA Astrophysics Data System (ADS)

    Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten

    1999-05-01

    Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.

  6. Synthetic aperture radar image formation for the moving-target and near-field bistatic cases

    NASA Astrophysics Data System (ADS)

    Ding, Yu

    This dissertation addresses topics in two areas of synthetic aperture radar (SAR) image formation: time-frequency based SAR imaging of moving targets and a fast backprojection (BP) algorithm for near-field bistatic SAR imaging. SAR imaging of a moving target is a challenging task due to unknown motion of the target. We approach this problem in a theoretical way, by analyzing the Wigner-Ville distribution (WVD) based SAR imaging technique. We derive approximate closed-form expressions for the point-target response of the SAR imaging system, which quantify the image resolution, and show how the blurring in conventional SAR imaging can be eliminated, while the target shift still remains. Our analyses lead to accurate prediction of the target position in the reconstructed images. The derived expressions also enable us to further study additional aspects of WVD-based SAR imaging. Bistatic SAR imaging is more involved than the monostatic SAR case, because of the separation of the transmitter and the receiver, and possibly the changing bistatic geometry. For near-field bistatic SAR imaging, we develop a novel fast BP algorithm, motivated by a newly proposed fast BP algorithm in computed tomography. First we show that the BP algorithm is the spatial-domain counterpart of the benchmark ω-k algorithm in bistatic SAR imaging, yet it avoids the frequency-domain interpolation in the ω-k algorithm, which may cause artifacts in the reconstructed image. We then derive the band-limited property for BP methods in both monostatic and bistatic SAR imaging, which is the basis for developing the fast BP algorithm. We compare our algorithm with other frequency-domain based algorithms, and show that it achieves better reconstructed image quality, while having the same computational complexity as that of the frequency-domain based algorithms.

  7. [Application of 3S techniques in ecological landscape planning of Harbin suburb].

    PubMed

    Fan, Wenyi; Gong, Wenfeng; Liu, Dandan; Zhou, Hongze; Zhu, Ning

    2005-12-01

    Using SPOT image data, a soil utilization map (1:50,000) and other related materials for Harbin, and supported by GIS, RS and GPS techniques, this paper derived the landscape pattern of the Harbin suburbs and a Digital Elevation Model (DEM) of Harbin. Indices including mean patch area, landscape dominance, mean slope, mean altitude, and fragmentation degree were selected and analyzed together, and the ecological landscape planning was carried out with the DEM model. The results showed that 3S techniques can help identify typical landscape types. A landscape type database was established and a landscape type thematic map was generated, combining land use status, landscape distribution, physiognomy, and land use types. The ecological landscape planning was described at large scale by combining the image data and the DEM, and the landscape structure of the Harbin suburbs was represented directly by combining the planning with the DEM. This improves the ecological function of the region and provides a scientific basis for healthy development of the urban-rural integration area.

  8. Measurement of glucose concentration by image processing of thin film slides

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David

    2012-02-01

    Measurement of glucose concentration is important for diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing based approach for measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose, present in the sample, reacts with reagents on the slide to produce a color dye. The color intensity of the dye formed varies with glucose at different concentration levels. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. Training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough based technique and then an intensity based feature is calculated from the segmented region. Subsequently, a mathematical model that describes a relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented followed by feature extraction. These two initial steps are similar to those done in training. However, in the final step, the algorithm uses the model (feature vs. concentration) obtained from the training and feature generated from test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
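
    The last step, turning the intensity feature of a test slide into a concentration using the trained model, amounts to fitting and inverting a calibration curve. The sketch below uses made-up placeholder numbers and a quadratic fit purely for illustration; it is not the paper's model or data:

```python
# Hedged sketch of the calibration step: fit a model relating the mean dye
# intensity (the extracted feature) to known glucose concentrations, then
# apply it to a test slide's feature value.
import numpy as np

# Training features (mean intensity of the segmented dye region) vs. known
# concentrations (mg/dL); values are made-up placeholders for the example.
intensity = np.array([0.92, 0.80, 0.65, 0.52, 0.41])
concentration = np.array([50, 100, 200, 300, 400], dtype=float)

# Quadratic fit of concentration as a function of intensity.
model = np.polynomial.Polynomial.fit(intensity, concentration, deg=2)

test_intensity = 0.58                       # feature from a test slide image
print(f"predicted concentration: {model(test_intensity):.0f} mg/dL")
```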

  9. Blood vessel segmentation algorithms - Review of methods, datasets and evaluation metrics.

    PubMed

    Moccia, Sara; De Momi, Elena; El Hadji, Sara; Mattos, Leonardo S

    2018-05-01

    Blood vessel segmentation is a topic of high interest in medical image analysis since the analysis of vessels is crucial for diagnosis, treatment planning and execution, and evaluation of clinical outcomes in different fields, including laryngology, neurosurgery and ophthalmology. Automatic or semi-automatic vessel segmentation can support clinicians in performing these tasks. Different medical imaging techniques are currently used in clinical practice and an appropriate choice of the segmentation algorithm is mandatory to deal with the adopted imaging technique characteristics (e.g. resolution, noise and vessel contrast). This paper aims at reviewing the most recent and innovative blood vessel segmentation algorithms. Among the algorithms and approaches considered, we deeply investigated the most novel blood vessel segmentation methods, including machine learning, deformable model, and tracking-based approaches. This paper analyzes more than 100 articles focused on blood vessel segmentation methods. For each analyzed approach, summary tables are presented reporting imaging technique used, anatomical region and performance measures employed. Benefits and disadvantages of each method are highlighted. Despite the constant progress and efforts addressed in the field, several issues still need to be overcome. A relevant limitation consists in the segmentation of pathological vessels. Unfortunately, no consistent research effort has been devoted to this issue yet. Research is needed since some of the main assumptions made for healthy vessels (such as linearity and circular cross-section) do not hold in pathological tissues, which on the other hand require new vessel model formulations. Moreover, image intensity drops, noise and low contrast still represent an important obstacle for the achievement of a high-quality enhancement. This is particularly true for optical imaging, where the image quality is usually lower in terms of noise and contrast with respect to magnetic resonance and computed tomography angiography. No single segmentation approach is suitable for all the different anatomical regions or imaging modalities, thus the primary goal of this review was to provide an up-to-date source of information about the state of the art of the vessel segmentation algorithms so that the most suitable methods can be chosen according to the specific task. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Imaging of Endogenous Metabolites of Plant Leaves by Mass Spectrometry Based on Laser Activated Electron Tunneling.

    PubMed

    Huang, Lulu; Tang, Xuemei; Zhang, Wenyang; Jiang, Ruowei; Chen, Disong; Zhang, Juan; Zhong, Hongying

    2016-04-07

    A new mass spectrometric imaging approach based on laser activated electron tunneling (LAET) was described and applied to the analysis of endogenous metabolites of plant leaves. LAET is an electron-directed soft ionization technique. Compressed thin films of semiconductor nanoparticles of bismuth cobalt zinc oxide were placed on the sample plate for proof-of-principle demonstration because they can not only absorb ultraviolet laser light but also have high electron mobility. Upon laser irradiation, electrons are excited from valence bands to conduction bands. With appropriate kinetic energies, photoexcited electrons can tunnel away from the barrier and eventually be captured by charge-deficient atoms present in neutral molecules. The resultant unpaired electron subsequently initiates specific chemical bond cleavage and generates ions that can be detected in the negative ion mode of the mass spectrometer. LAET avoids the co-crystallization of routinely used organic matrix materials with analytes in MALDI (matrix-assisted laser desorption ionization) analysis. Thus uneven distribution of crystals with different sizes and shapes, as well as background peaks in the low mass range resulting from matrix molecules, is eliminated. Advantages of the LAET imaging technique include not only improved spatial resolution but also photoelectron capture dissociation, which produces predictable fragment ions.

  11. Advanced Ecosystem Mapping Techniques for Large Arctic Study Domains Using Calibrated High-Resolution Imagery

    NASA Astrophysics Data System (ADS)

    Macander, M. J.; Frost, G. V., Jr.

    2015-12-01

    Regional-scale mapping of vegetation and other ecosystem properties has traditionally relied on medium-resolution remote sensing such as Landsat (30 m) and MODIS (250 m). Yet the burgeoning availability of high-resolution (<=2 m) imagery and ongoing advances in computing power and analysis tools raise the prospect of performing ecosystem mapping at fine spatial scales over large study domains. Here we demonstrate cutting-edge mapping approaches over a ~35,000 km² study area on Alaska's North Slope using calibrated and atmospherically-corrected mosaics of high-resolution WorldView-2 and GeoEye-1 imagery: (1) an a priori spectral approach incorporating the Satellite Imagery Automatic Mapper (SIAM) algorithms; (2) image segmentation techniques; and (3) texture metrics. The SIAM spectral approach classifies radiometrically-calibrated imagery into general vegetation density categories and non-vegetated classes. The SIAM classes were developed globally and their applicability in arctic tundra environments has not been previously evaluated. Image segmentation, or object-based image analysis, automatically partitions high-resolution imagery into homogeneous image regions that can then be analyzed based on spectral, textural, and contextual information. We applied eCognition software to delineate waterbodies and vegetation classes, in combination with other techniques. Texture metrics were evaluated to determine the feasibility of using high-resolution imagery to algorithmically characterize periglacial surface forms (e.g., ice-wedge polygons), which are an important physical characteristic of permafrost-dominated regions but which cannot be distinguished by medium-resolution remote sensing. These advanced mapping techniques yield products that can provide essential information supporting a broad range of ecosystem science and land-use planning applications in northern Alaska and elsewhere in the circumpolar Arctic.

  12. 35-45 Giga Hertz Transceiver System for Phase and Magnitude Detection

    NASA Technical Reports Server (NTRS)

    Beni, Aman Aflaki

    2007-01-01

    Nondestructive evaluation (NDE) is the science and practice of examining an object in a way that does not adversely affect the object's usefulness. Different types of NDE methods exist, but this thesis is based on microwave and millimeter wave NDE using imaging techniques. Microwave NDE is based on illuminating the object under test with a microwave signal and studying the various properties of the signal reflected from the object. This reflected signal contains information about the inner structure of the object under test. This information may be contained in several parameters, including the phase and magnitude of the reflected signal. The goal of this project is to design and build a Q-band coherent transceiver that is capable of measuring the reflected signal's phase and magnitude so that an image of the object under test may be reconstructed. Of the several techniques that can be used to construct an image of the object under test, techniques of interest to this work include the synthetic aperture focusing technique (SAFT) and microwave holography. The transceiver system should be able to sweep a large portion of the Q-band frequency range in small frequency steps as quickly as possible while keeping the detected phase and magnitude of the reflected signal highly accurate. Several different designs were studied and the final schematic diagram of the transceiver system was determined. One of the most important modules designed, implemented and tested in the laboratory was an accurate phase/magnitude detector circuit. Comparison of scans acquired with the transceiver system and with a vector network analyzer (VNA) showed that this transceiver system has great potential to replace a VNA for the purpose of microwave and millimeter wave imaging.

  13. Human-machine interface for a VR-based medical imaging environment

    NASA Astrophysics Data System (ADS)

    Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans

    1997-05-01

    Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long familiarization times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even alleviate communication between specialists from different fields or in educational and training applications.

  14. Multimodal tissue perfusion imaging using multi-spectral and thermographic imaging systems applied on clinical data

    NASA Astrophysics Data System (ADS)

    Klaessens, John H. G. M.; Nelisse, Martin; Verdaasdonk, Rudolf M.; Noordmans, Herke Jan

    2013-03-01

    Clinical interventions can cause changes in tissue perfusion, oxygenation or temperature. Real-time imaging of these phenomena could be useful for surgical strategy or for understanding physiological regulation mechanisms. Two noncontact imaging techniques were applied for imaging of large tissue areas: LED based multispectral imaging (MSI, 17 different wavelengths, 370 nm-880 nm) and thermal imaging (7.5 to 13.5 μm). Oxygenation concentration changes were calculated using different analysis methods. The advantages of these methods are presented for stationary and dynamic applications. Concentration calculations of chromophores in tissue require the right choice of wavelengths. The effects of different wavelength choices on hemoglobin concentration calculations were studied under laboratory conditions and consequently applied in clinical studies. Corrections for interferences during the clinical registrations (ambient light fluctuations, tissue movements) were performed. The wavelength dependency of the algorithms was studied and the wavelength sets with the best results will be presented. The multispectral and thermal imaging systems were applied during clinical intervention studies: reperfusion of tissue flap transplantation (ENT), effectiveness of local anesthetic block, and during open brain surgery in patients with epileptic seizures. The LED multispectral imaging system successfully imaged the perfusion and oxygenation changes during clinical interventions. The thermal images show local heat distributions over tissue areas as a result of changes in tissue perfusion. Multispectral imaging and thermal imaging provide complementary information and are promising techniques for real-time diagnostics of physiological processes in medicine.

  15. QR code optical encryption using spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Cheremkhin, P. A.; Krasnov, V. V.; Rodin, V. G.; Starikov, R. S.

    2017-02-01

    Optical encryption is an actively developing field of science. The majority of encryption techniques use coherent illumination and suffer from speckle noise, which severely limits their applicability. The spatially incoherent encryption technique does not have this drawback, but its effectiveness is dependent on the Fourier spectrum properties of the image to be encrypted. The application of a quick response (QR) code in the capacity of a data container solves this problem, and the embedded error correction code also enables errorless decryption. The optical encryption of digital information in the form of QR codes using spatially incoherent illumination was implemented experimentally. The encryption is based on the optical convolution of the image to be encrypted with the kinoform point spread function, which serves as an encryption key. Two liquid crystal spatial light modulators were used in the experimental setup for the QR code and the kinoform imaging, respectively. The quality of the encryption and decryption was analyzed in relation to the QR code size. Decryption was conducted digitally. The successful decryption of encrypted QR codes of up to 129  ×  129 pixels was demonstrated. A comparison with the coherent QR code encryption technique showed that the proposed technique has a signal-to-noise ratio that is at least two times higher.
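
    The encryption principle, optical convolution of the plaintext image with a kinoform point spread function acting as the key, can be mimicked numerically. The sketch below is a simplified digital analogue (random PSF, circular convolution, regularized inverse filter), not the authors' optical setup or decryption procedure:

```python
# Digital sketch of convolution-based encryption: the cipher image is the
# convolution of the plaintext with a random PSF standing in for the kinoform
# key; decryption uses a regularized inverse filter.
import numpy as np

rng = np.random.default_rng(3)
qr = (rng.random((129, 129)) > 0.5).astype(float)   # stand-in for a QR code
psf_key = rng.random((129, 129))
psf_key /= psf_key.sum()                            # normalized "kinoform" key

QR, KEY = np.fft.fft2(qr), np.fft.fft2(psf_key)
cipher = np.real(np.fft.ifft2(QR * KEY))            # incoherent (circular) convolution

# Decryption: inverse filter with a tiny regularization term; with a noiseless
# cipher this recovers the plaintext almost exactly.
eps = 1e-8
recovered = np.real(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(KEY) /
                                 (np.abs(KEY) ** 2 + eps)))
print(f"mean squared decryption error: {np.mean((recovered - qr) ** 2):.2e}")
```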

  16. Toward in situ x-ray diffraction imaging at the nanometer scale

    NASA Astrophysics Data System (ADS)

    Zatsepin, Nadia A.; Dilanian, Ruben A.; Nikulin, Andrei Y.; Gable, Brian M.; Muddle, Barry C.; Sakata, Osami

    2008-08-01

    We present the results of preliminary investigations determining the sensitivity and applicability of a novel x-ray diffraction based nanoscale imaging technique, including simulations and experiments. The ultimate aim of this nascent technique is non-destructive, bulk-material characterization on the nanometer scale, involving three dimensional image reconstructions of embedded nanoparticles and in situ sample characterization. The approach is insensitive to x-ray coherence, making it applicable to synchrotron and laboratory hard x-ray sources, opening the possibility of unprecedented nanometer resolution with the latter. The technique is being developed with a focus on analyzing a technologically important light metal alloy, Al-xCu (where x is 2.0-5.0 %wt). The mono- and polycrystalline samples contain crystallographically oriented, weakly diffracting Al2Cu nanoprecipitates in a sparse, spatially random dispersion within the Al matrix. By employing a triple-axis diffractometer in the non-dispersive setup we collected two-dimensional reciprocal space maps of synchrotron x-rays diffracted from the Al2Cu nanoparticles. The intensity profiles of the diffraction peaks confirmed the sensitivity of the technique to the presence and orientation of the nanoparticles. This is a fundamental step towards in situ observation of such extremely sparse, weakly diffracting nanoprecipitates embedded in light metal alloys at early stages of their growth.

  17. Image encryption technique based on new two-dimensional fractional-order discrete chaotic map and Menezes–Vanstone elliptic curve cryptosystem

    NASA Astrophysics Data System (ADS)

    Liu, Zeyu; Xia, Tiecheng; Wang, Jinbo

    2018-03-01

    We propose a new fractional two-dimensional triangle function combination discrete chaotic map (2D-TFCDM) with the discrete fractional difference. Moreover, the chaotic behaviors of the proposed map are observed and the bifurcation diagrams, the largest Lyapunov exponent plot, and the phase portraits are derived, respectively. Finally, with the secret keys generated by the Menezes–Vanstone elliptic curve cryptosystem, we apply the discrete fractional map to color image encryption. After that, the image encryption algorithm is analyzed in four aspects and the results indicate that the proposed algorithm is superior to the other algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61072147 and 11271008).

  18. Early Breast Cancer Detection by Ultrawide Band Imaging with Dispersion Consideration

    NASA Astrophysics Data System (ADS)

    Xiao, Xia; Kikkawa, Takamaro

    2008-04-01

    Ultrawide band (UWB) microwave imaging is a promising method for early-stage breast cancer detection, based on the large contrast in electrical parameters between the tumor and normal breast tissue. The tumor can be detected by analyzing the reflection and scattering behavior of the UWB microwave signal propagating in the breast. In this study, the tumor location is determined by comparing the waveforms resulting from tumor-containing and tumor-free breasts. The frequency-dispersive characteristics of the fatty breast tissue, skin and tumor are considered in the study to approximate the actual electrical properties of the breast. The correct location and size are visualized for an early-stage tumor embedded in the breast using the principle of a confocal microwave imaging technique.

  19. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is critically important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference of images is computed from the top-hat filtered image and the high-boost filtered image. A Frangi filter is applied at multiple scales for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
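
    A condensed sketch of this kind of pipeline is given below, using scikit-image building blocks as stand-ins for the paper's steps (adaptive histogram equalization, top-hat filtering, multiscale Frangi enhancement, Otsu thresholding, and a final pixel-wise AND). The file name, channel choice and all parameters are illustrative assumptions:

```python
# Condensed retinal-vessel sketch with skimage stand-ins for the paper's steps.
import numpy as np
from skimage import io, exposure, morphology, filters

rgb = io.imread("retina.png")                     # illustrative file name, assumed RGB
green = rgb[..., 1] / 255.0                       # green channel has the best vessel contrast

enhanced = exposure.equalize_adapthist(green)     # adaptive histogram equalization
tophat = morphology.white_tophat(1 - enhanced, morphology.disk(8))

# Multiscale vesselness on the inverted image (vessels are dark in the original).
vesselness = filters.frangi(1 - enhanced, sigmas=range(1, 6), black_ridges=False)

mask_a = tophat > filters.threshold_otsu(tophat)
mask_b = vesselness > filters.threshold_otsu(vesselness)
vessels = mask_a & mask_b                         # pixel-by-pixel AND of the two masks

print(f"vessel pixels: {int(vessels.sum())}")
```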

  20. Textured digital elevation model formation from low-cost UAV LADAR/digital image data

    NASA Astrophysics Data System (ADS)

    Bybee, Taylor C.; Budge, Scott E.

    2015-05-01

    Textured digital elevation models (TDEMs) have valuable use in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Being a low-cost UAV, only a coarse knowledge of position and attitude is known, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with the rough estimate of position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.

  1. Label fusion based brain MR image segmentation via a latent selective model

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu

    2018-04-01

    Multi-atlas segmentation is an effective approach and is increasingly popular for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirement for higher accuracy, faster segmentation, and robustness is always a great challenge. In this paper, we propose a novel algorithm that combines the two branches, using a global weighted fusion strategy based on a patch latent selective model, to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and we regard it as an isolated label to give the background the same standing as the regions of interest. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.

  2. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression is comprised of three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
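
    The effect described above, that lossless coders do much better after a DPCM-style differential transform, is easy to reproduce. The small experiment below uses zlib (LZ77 plus Huffman coding) as a convenient stand-in for the LZW coder discussed in the article, and a synthetic gradient image in place of a radiograph:

```python
# Compare lossless compression of a raw image and its DPCM-transformed version.
import numpy as np
import zlib

rng = np.random.default_rng(4)
# Smooth synthetic "image": a gradient plus mild noise, stored as 8-bit.
x = np.linspace(0, 255, 512)
img = (np.add.outer(x, x) / 2 + rng.normal(0, 2, (512, 512))).clip(0, 255).astype(np.uint8)

# DPCM: keep the first column, replace the rest by differences to the left
# neighbor (wrap-around arithmetic in uint8 keeps this perfectly reversible).
dpcm = img.copy()
dpcm[:, 1:] = img[:, 1:] - img[:, :-1]

raw_size = len(zlib.compress(img.tobytes(), 9))
dpcm_size = len(zlib.compress(dpcm.tobytes(), 9))
print(f"raw: {raw_size} bytes, DPCM + deflate: {dpcm_size} bytes")
```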

  3. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer.

    PubMed

    Weng, Sheng; Xu, Xiaoyun; Li, Jiasong; Wong, Stephen T C

    2017-10-01

    Lung cancer is the most prevalent type of cancer and the leading cause of cancer-related deaths worldwide. Coherent anti-Stokes Raman scattering (CARS) is capable of providing cellular-level images and resolving pathologically related features on human lung tissues. However, conventional means of analyzing CARS images requires extensive image processing, feature engineering, and human intervention. This study demonstrates the feasibility of applying a deep learning algorithm to automatically differentiate normal and cancerous lung tissue images acquired by CARS. We leverage the features learned by pretrained deep neural networks and retrain the model using CARS images as the input. We achieve 89.2% accuracy in classifying normal, small-cell carcinoma, adenocarcinoma, and squamous cell carcinoma lung images. This computational method is a step toward on-the-spot diagnosis of lung cancer and can be further strengthened by the efforts aimed at miniaturizing the CARS technique for fiber-based microendoscopic imaging. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
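
    The transfer-learning recipe, reusing an ImageNet-pretrained backbone and retraining only a small classification head for the four tissue classes, can be sketched as follows. This is not the authors' exact network; the backbone choice, input size and head layout are illustrative assumptions:

```python
# Hedged sketch of transfer learning for 4-class tissue classification.
import tensorflow as tf

NUM_CLASSES = 4   # normal, small-cell carcinoma, adenocarcinoma, squamous cell

backbone = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False                     # keep the pretrained features fixed

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be model.fit(train_ds, validation_data=val_ds, epochs=...),
# where the datasets yield (image, label) pairs of CARS tiles preprocessed with
# tf.keras.applications.resnet50.preprocess_input.
```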

  4. Introduction to the GEOBIA 2010 special issue: From pixels to geographic objects in remote sensing image analysis

    NASA Astrophysics Data System (ADS)

    Addink, Elisabeth A.; Van Coillie, Frieke M. B.; De Jong, Steven M.

    2012-04-01

    Traditional image analysis methods are mostly pixel-based and use the spectral differences of landscape elements at the Earth surface to classify these elements or to extract element properties from the Earth Observation image. Geographic object-based image analysis (GEOBIA) has received considerable attention over the past 15 years for analyzing and interpreting remote sensing imagery. In contrast to traditional image analysis, GEOBIA works more like the human eye-brain combination does. The latter uses the object's color (spectral information), size, texture, shape and occurrence to other image objects to interpret and analyze what we see. GEOBIA starts by segmenting the image grouping together pixels into objects and next uses a wide range of object properties to classify the objects or to extract object's properties from the image. Significant advances and improvements in image analysis and interpretation are made thanks to GEOBIA. In June 2010 the third conference on GEOBIA took place at the Ghent University after successful previous meetings in Calgary (2008) and Salzburg (2006). This special issue presents a selection of the 2010 conference papers that are worked out as full research papers for JAG. The papers cover GEOBIA applications as well as innovative methods and techniques. The topics range from vegetation mapping, forest parameter estimation, tree crown identification, urban mapping, land cover change, feature selection methods and the effects of image compression on segmentation. From the original 94 conference papers, 26 full research manuscripts were submitted; nine papers were selected and are presented in this special issue. Selection was done on the basis of quality and topic of the studies. The next GEOBIA conference will take place in Rio de Janeiro from 7 to 9 May 2012 where we hope to welcome even more scientists working in the field of GEOBIA.

  5. Assessment of Molecular Acoustic Angiography for Combined Microvascular and Molecular Imaging in Preclinical Tumor Models

    PubMed Central

    Lindsey, Brooks D.; Shelton, Sarah E.; Foster, F. Stuart; Dayton, Paul A.

    2017-01-01

    Purpose To evaluate a new ultrasound molecular imaging approach in its ability to image a preclinical tumor model and to investigate the capacity to visualize and quantify co-registered microvascular and molecular imaging volumes. Procedures Molecular imaging using the new technique was compared with a conventional ultrasound molecular imaging technique (multi-pulse imaging) by varying the injected microbubble dose and scanning each animal using both techniques. Each of the 14 animals was randomly assigned one of three doses; bolus dose was varied, and the animals were imaged for three consecutive days so that each animal received every dose. A microvascular scan was also acquired for each animal by administering an infusion of non-targeted microbubbles. These scans were paired with co-registered molecular images (VEGFR2-targeted microbubbles), the vessels were segmented, and the spatial relationships between vessels and VEGFR2 targeting locations were analyzed. In 5 animals, an additional scan was performed in which the animal received a bolus of microbubbles targeted to E- and P-selectin. Vessel tortuosity as a function of distance from VEGF and selectin targeting was analyzed in these animals. Results Although resulting differences in image intensity due to varying microbubble dose were not significant between the two lowest doses, superharmonic imaging had significantly higher contrast-to-tissue ratio (CTR) than multi-pulse imaging (mean across all doses: 13.98 dB for molecular acoustic angiography vs. 0.53 dB for multi-pulse imaging; p = 4.9 × 10−10). Analysis of registered microvascular and molecular imaging volumes indicated that vessel tortuosity decreases with increasing distance from both VEGFR2 and selectin targeting sites. Conclusions Molecular acoustic angiography (superharmonic molecular imaging) exhibited a significant increase in CTR at all doses tested due to superior rejection of tissue artifact signals. Due to the high resolution of acoustic angiography molecular imaging, it is possible to analyze spatial relationships in aligned microvascular and molecular superharmonic imaging volumes. Future studies are required to separate the effects of biomarker expression and blood flow kinetics in comparing local tortuosity differences between different endothelial markers such as VEGFR2, E-selectin and P-selectin. PMID:27519522

  6. Digital Image Processing Technique for Breast Cancer Detection

    NASA Astrophysics Data System (ADS)

    Guzmán-Cabrera, R.; Guzmán-Sepúlveda, J. R.; Torres-Cisneros, M.; May-Arrioja, D. A.; Ruiz-Pinales, J.; Ibarra-Manzano, O. G.; Aviña-Cervantes, G.; Parada, A. González

    2013-09-01

    Breast cancer is the most common cause of death in women and the second leading cause of cancer deaths worldwide. Primary prevention in the early stages of the disease becomes complex as the causes remain almost unknown. However, some typical signatures of this disease, such as masses and microcalcifications appearing on mammograms, can be used to improve early diagnostic techniques, which is critical for women’s quality of life. X-ray mammography is the main test used for screening and early diagnosis, and its analysis and processing are the keys to improving breast cancer prognosis. As masses and benign glandular tissue typically appear with low contrast and often very blurred, several computer-aided diagnosis schemes have been developed to support radiologists and internists in their diagnosis. In this article, an approach is proposed to effectively analyze digital mammograms based on texture segmentation for the detection of early stage tumors. The proposed algorithm was tested over several images taken from the digital database for screening mammography for cancer research and diagnosis, and it was found to be absolutely suitable to distinguish masses and microcalcifications from the background tissue using morphological operators and then extract them through machine learning techniques and a clustering algorithm for intensity-based segmentation.

  7. Strain Rate Tensor Estimation in Cine Cardiac MRI Based on Elastic Image Registration

    NASA Astrophysics Data System (ADS)

    Sánchez-Ferrero, Gonzalo Vegas; Vega, Antonio Tristán; Grande, Lucilio Cordero; de La Higuera, Pablo Casaseca; Fernández, Santiago Aja; Fernández, Marcos Martín; López, Carlos Alberola

    In this work we propose an alternative method to estimate and visualize the Strain Rate Tensor (SRT) in Magnetic Resonance Images (MRI) when Phase Contrast MRI (PCMRI) and Tagged MRI (TMRI) are not available. This alternative is based on image processing techniques. Concretely, image registration algorithms are used to estimate the movement of the myocardium at each point. Additionally, a consistency checking method is presented to validate the accuracy of the estimates when no gold standard is available. Results show that the consistency checking method provides an upper bound on the mean squared error of the estimate. Our experiments with real data show that the registration algorithm provides a useful deformation field from which to estimate the SRT fields. A classification of regional normal and dysfunctional contraction patterns, compared against expert diagnosis, shows that the parameters extracted from the estimated SRT can represent these patterns. Additionally, a scheme for visualizing and analyzing the local behavior of the SRT field is presented.
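
    Once registration yields a per-pixel velocity (displacement-per-frame) field, the strain rate tensor follows from its symmetric spatial gradient. The sketch below assumes velocity components vx, vy and pixel spacings dx, dy as inputs; it illustrates the standard definition rather than the exact implementation in the paper.

    ```python
    # Sketch: strain rate tensor from a registration-derived velocity field.
    # vx, vy are 2-D arrays of in-plane velocity components; dx, dy are pixel
    # spacings. The tensor is the symmetric part of the velocity gradient.
    import numpy as np

    def strain_rate_tensor(vx, vy, dx=1.0, dy=1.0):
        dvx_dy, dvx_dx = np.gradient(vx, dy, dx)   # axis 0 varies along y, axis 1 along x
        dvy_dy, dvy_dx = np.gradient(vy, dy, dx)
        exx = dvx_dx
        eyy = dvy_dy
        exy = 0.5 * (dvx_dy + dvy_dx)
        # Return the 2x2 symmetric tensor at every pixel: shape (H, W, 2, 2).
        return np.stack([np.stack([exx, exy], -1),
                         np.stack([exy, eyy], -1)], -2)

    # Principal strain rates (eigenvalues) can then be visualized per pixel:
    # eigvals = np.linalg.eigvalsh(strain_rate_tensor(vx, vy))
    ```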

  8. A novel approach based on OCT for tongue inspection in traditional Chinese medicine

    NASA Astrophysics Data System (ADS)

    Dong, Haixin; Guo, Zhouyi; Zeng, Changchun; Zhong, Huiqing; Gong, Xin; He, Yonghong; Wang, Ruikang K.; Liu, Songhao

    2007-11-01

    The aim of this report is to establish a tongue inspection method based on OCT imaging for quantifying tongue properties in traditional Chinese medical diagnosis. Measurements were performed with OCT equipment in a rat model of Spleen-Stomach Dampness-Heat Syndrome; OCT images and histology estimates of the glossal microstructure were obtained, and the thickness and moisture degree of the tongue coating were analyzed. The OCT images showed that every layer of the tongue matched the histology estimates of the glossal microstructure. Compared with the normal control group, the thickness of the tongue coating was increased in rats with Spleen-Stomach Dampness-Heat Syndrome (P<0.01), whereas the moisture degree of the tongue body in the model group was decreased (P<0.01). Therefore, the OCT imaging technique may be a helpful tool for providing an objective diagnostic standard for tongue inspection in the clinical practice and research of TCM.

  9. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to those of lucky imaging is used to capture 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed using the interferometric algorithms in REDUC, using all frames. This results in a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed by telescopes that are too small in aperture to capture a sufficient number of speckles for true speckle interferometry.

  10. Defogging of road images using gain coefficient-based trilateral filter

    NASA Astrophysics Data System (ADS)

    Singh, Dilbag; Kumar, Vijay

    2018-01-01

    Poor weather conditions are responsible for a large share of road accidents every year. Conditions such as fog degrade the visibility of objects, so it becomes difficult for drivers to identify vehicles in a foggy environment. Dark channel prior (DCP)-based defogging techniques have been found to be an efficient way to remove fog from road images. However, DCP produces poor results when image objects are inherently similar to the airlight and no shadow is cast on them. To eliminate this problem, a modified restoration model based on DCP is developed to remove fog from road images. The transmission map is also refined by developing a gain coefficient-based trilateral filter. Thus, the proposed technique has the ability to remove fog from road images in an effective manner. The proposed technique is compared with seven well-known defogging techniques on two benchmark foggy image datasets and five real-time foggy images. The experimental results demonstrate that the proposed approach is able to remove different types of fog from roadside images and significantly improve image visibility. They also reveal that the restored images have little or no artifacts.
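
    For context, the classical dark channel prior restoration that this work builds on can be sketched as below; the paper's own contributions (the modified restoration model and the gain coefficient-based trilateral filter) are not reproduced here, and the patch size, omega, and t0 values are the usual illustrative choices rather than the authors' settings.

    ```python
    # Sketch of the baseline dark channel prior (DCP) defogging steps:
    # dark channel -> airlight -> transmission map -> scene radiance recovery.
    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # img: HxWx3 float array in [0, 1]
        return minimum_filter(img.min(axis=2), size=patch)

    def defog(img, patch=15, omega=0.95, t0=0.1):
        dark = dark_channel(img, patch)
        # Airlight: mean colour of the brightest 0.1% dark-channel pixels.
        n = max(1, int(dark.size * 0.001))
        idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
        airlight = img[idx].mean(axis=0)
        # Transmission estimate, then invert the haze imaging model.
        t = 1.0 - omega * dark_channel(img / airlight, patch)
        t = np.clip(t, t0, 1.0)[..., None]
        return np.clip((img - airlight) / t + airlight, 0.0, 1.0)
    ```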

  11. Improved Space Object Observation Techniques Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.

    2013-08-01

    CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices have started to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, which each contain their own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, the feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. Presently applied and proposed optical observation strategies for space debris surveys and space surveillance applications were analyzed. The major design drivers were identified and potential benefits from using available and future CMOS sensors were assessed. The major challenges and design drivers for ground-based and space-based optical observation strategies were analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed by performing laboratory test measurements.

  12. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, there has been a decline in natural resources, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed to obtain higher-quality information and thereby reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel- and object-based classification techniques to imagery fused with different pansharpening algorithms and to evaluate the resulting thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high-resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF-based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied using both pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with the object-based approach to the image fused with Weighted Wavelet `à trous' through Fractal Dimension Maps. Finally, the difficulty of classification in the Teide ecosystem, due to the heterogeneity and small size of the species, is highlighted. It therefore remains important to obtain accurate thematic maps for further studies in the management and conservation of natural resources.

  13. A double-sided microscope to realize whole-ganglion imaging of membrane potential in the medicinal leech

    PubMed Central

    Wagenaar, Daniel A

    2017-01-01

    Studies of neuronal network emergence during sensory processing and motor control are greatly facilitated by technologies that allow us to simultaneously record the membrane potential dynamics of a large population of neurons at single-cell resolution. To achieve whole-brain recording with the ability to detect both small synaptic potentials and action potentials, we developed a voltage-sensitive dye (VSD) imaging technique based on a double-sided microscope that can image two sides of a nervous system simultaneously. We applied this system to the segmental ganglia of the medicinal leech. Double-sided VSD imaging enabled simultaneous recording of membrane potential events from almost all of the identifiable neurons. Using data obtained from double-sided VSD imaging, we analyzed neuronal dynamics in both sensory processing and generation of behavior and constructed functional maps for the identification of neurons contributing to these processes. PMID:28944754

  14. Object localization in handheld thermal images for fireground understanding

    NASA Astrophysics Data System (ADS)

    Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven

    2017-05-01

    Despite the broad application of handheld thermal imaging cameras in firefighting, their usage is mostly limited to subjective interpretation by the person carrying the device. As a remedy to this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. An automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques by providing information from data- and sensing-driven approaches. In this work, transfer learning is applied on multi-labeling convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, the understanding of the room configuration (i.e., object locations) for indoor localization in reduced-visibility environments and the linking with Building Information Models (BIM) are investigated.

  15. The recognition of ocean red tide with hyper-spectral-image based on EMD

    NASA Astrophysics Data System (ADS)

    Zhao, Wencang; Wei, Hongli; Shi, Changjiang; Ji, Guangrong

    2008-05-01

    This paper introduces a new technique for red tide recognition from remotely sensed hyperspectral images based on empirical mode decomposition (EMD), using data from an artificial red tide experiment in the East China Sea in 2002. A set of characteristic parameters describing the absorbing crest and reflecting crest of the red tide, together with recognition methods, is put forward based on the image data; with these, the spectral information of certain non-dominant algal species in a red tide occurrence is analyzed to establish a foundation for estimating the species. Comparative experiments show that the method is effective. Meanwhile, the transitional area between the red-tide zone and the non-red-tide zone can be detected using information on the thickness of the algal influence, with which a red tide can be forecast.

  16. Texture-Based Analysis of 100 MR Examinations of Head and Neck Tumors - Is It Possible to Discriminate Between Benign and Malignant Masses in a Multicenter Trial?

    PubMed

    Fruehwald-Pallamar, J; Hesselink, J R; Mafee, M F; Holzer-Fruehwald, L; Czerny, C; Mayerhoefer, M E

    2016-02-01

    To evaluate whether texture-based analysis of standard MRI sequences can help in the discrimination between benign and malignant head and neck tumors. The MR images of 100 patients with a histologically clarified head or neck mass, from two different institutions, were analyzed. Texture-based analysis was performed using texture analysis software, with region-of-interest measurements for 2D and 3D evaluation independently for all axial sequences. COC, RUN, GRA, ARM, and WAV features were calculated for all ROIs. Ten texture feature subsets were used for a linear discriminant analysis, in combination with k-nearest-neighbor classification. Benign and malignant tumors were compared with regard to texture-based values. There were differences in the images from different field-strength scanners, as well as from different vendors. For the differentiation of benign and malignant tumors, we found differences on STIR and T2-weighted images for 2D evaluation, and on contrast-enhanced T1-TSE with fat saturation for 3D evaluation. In a separate analysis of the 1.5 and 3 Tesla subgroups, more discriminating features were found. Texture-based analysis is a useful tool in the discrimination of benign and malignant tumors when performed on one scanner with the same protocol. We cannot recommend this technique for use in multicenter studies with clinical data. 2D/3D texture-based analysis can be performed in head and neck tumors. Texture-based analysis can differentiate between benign and malignant masses. Analyzed MR images should originate from one scanner with an identical protocol. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Emergence of Convolutional Neural Network in Future Medicine: Why and How. A Review on Brain Tumor Segmentation

    NASA Astrophysics Data System (ADS)

    Alizadeh Savareh, Behrouz; Emami, Hassan; Hajiabadi, Mohamadreza; Ghafoori, Mahyar; Majid Azimi, Seyed

    2018-03-01

    Manual analysis of brain tumor magnetic resonance images is usually accompanied by problems, and several techniques have been proposed for brain tumor segmentation. This study focuses on searching popular databases for related studies and surveys the theoretical and practical aspects of Convolutional Neural Networks in brain tumor segmentation. Based on our findings, details about the related studies, including the datasets used, evaluation parameters, preferred architectures and complementary steps, are analyzed. Deep learning, as a revolutionary idea in image processing, has achieved excellent results in brain tumor segmentation as well, and this is likely to continue until the next revolutionary idea emerges.

  18. Evidence of tampering in watermark identification

    NASA Astrophysics Data System (ADS)

    McLauchlan, Lifford; Mehrübeoglu, Mehrübe

    2009-08-01

    In this work, watermarks are embedded in digital images in the discrete wavelet transform (DWT) domain. Principal component analysis (PCA) is performed on the DWT coefficients. Next, higher-order statistics based on the principal components and the eigenvalues are determined for different sets of images. Feature sets are analyzed for different types of attacks in m-dimensional space. The results demonstrate the separability of the features for the tampered digital copies. Different feature sets are studied to determine more effective tamper-evident feature sets. In digital forensics, the probable manipulation(s) or modification(s) performed on the digital information can be identified using the described technique.
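
    The kind of feature vector described (DWT coefficients, PCA, then higher-order statistics of the principal components) can be sketched as below. The library choices (PyWavelets, scikit-learn, SciPy) and the wavelet and feature settings are assumptions, not the authors' exact configuration.

    ```python
    # Sketch: build a tamper-evidence feature vector from DWT detail subbands,
    # PCA of the coefficients, and higher-order statistics of the scores.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from scipy.stats import skew, kurtosis

    def tamper_features(img, wavelet="haar", n_components=4):
        cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), wavelet)
        # Treat the rows of the stacked detail subbands as observations for PCA.
        coeffs = np.vstack([cH, cV, cD])
        pca = PCA(n_components=n_components).fit(coeffs)
        scores = pca.transform(coeffs)
        feats = []
        for k in range(n_components):
            feats += [pca.explained_variance_[k],        # eigenvalue
                      skew(scores[:, k]), kurtosis(scores[:, k])]
        return np.array(feats)   # compare feature sets of original vs. suspect copy
    ```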

  19. Image resolution enhancement via image restoration using neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Shuangteng; Lu, Yihong

    2011-04-01

    Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is considered as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image details. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and look visually more pleasant.
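
    The inverse-problem view can be illustrated with a small sketch: find a high-resolution image x minimizing ||y − D(h * x)||², where h is the point spread function and D is decimation. The paper minimizes such a cost with a Hopfield network; plain gradient descent is used below purely for illustration, and the Gaussian PSF, decimation by subsampling, and step size are assumptions.

    ```python
    # Sketch: gradient descent on the observation-model cost for single-image SR.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def super_resolve(y, factor=2, psf_sigma=1.0, iters=200, step=1.0):
        # y: low-resolution image; x: high-resolution estimate.
        x = np.kron(y, np.ones((factor, factor)))        # nearest-neighbour initial guess
        for _ in range(iters):
            blurred = gaussian_filter(x, psf_sigma)      # h * x
            simulated = blurred[::factor, ::factor]      # D(h * x)
            residual = simulated - y
            # Adjoint of the forward model: zero-insertion upsample, then blur.
            up = np.zeros_like(x)
            up[::factor, ::factor] = residual
            grad = gaussian_filter(up, psf_sigma)
            x -= step * grad
        return x
    ```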

  20. Practical issues of hyperspectral imaging analysis of solid dosage forms.

    PubMed

    Amigo, José Manuel

    2010-09-01

    Hyperspectral imaging techniques have widely demonstrated their usefulness in different areas of interest in pharmaceutical research during the last decade. In particular, middle infrared, near infrared, and Raman methods have gained special relevance. This rapid increase has been promoted by the capability of hyperspectral techniques to provide robust and reliable chemical and spatial information on the distribution of components in pharmaceutical solid dosage forms. Furthermore, the valuable combination of hyperspectral imaging devices with adequate data processing techniques offers the perfect landscape for developing new methods for scanning and analyzing surfaces. Nevertheless, the instrumentation and subsequent data analysis are not exempt from issues that must be thoughtfully considered. This paper describes and discusses the main advantages and drawbacks of the measurements and data analysis of hyperspectral imaging techniques in the development of solid dosage forms.

  1. Fourier-Mellin moment-based intertwining map for image encryption

    NASA Astrophysics Data System (ADS)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and intertwining logistic map is proposed. Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity of an input image. Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on input image using secret keys. The performance of proposed image encryption technique has been evaluated on five well-known benchmark images and also compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms others in terms of entropy, correlation analysis, a unified average changing intensity and the number of changing pixel rate. The simulation results reveal that the proposed technique provides high level of security and robustness against various types of attacks.
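
    The permutation-diffusion structure that chaotic-map encryption schemes like this one follow can be sketched as below. Note that a plain logistic map stands in for the paper's Fourier-Mellin-seeded intertwining map, the NSGA-II parameter optimization is omitted, and the key values shown are placeholders.

    ```python
    # Sketch: generic chaotic permutation + XOR-keystream masking ("diffusion").
    import numpy as np

    def logistic_sequence(x0, r, n, burn_in=100):
        x, seq = x0, np.empty(n)
        for i in range(n + burn_in):
            x = r * x * (1.0 - x)
            if i >= burn_in:
                seq[i - burn_in] = x
        return seq

    def encrypt(img, x0=0.3456, r=3.99):
        flat = img.astype(np.uint8).ravel()
        chaos = logistic_sequence(x0, r, flat.size)
        perm = np.argsort(chaos)                     # permutation (confusion) stage
        keystream = np.floor(chaos * 256).astype(np.uint8)
        cipher = flat[perm] ^ keystream              # masking via XOR keystream
        return cipher.reshape(img.shape)

    def decrypt(cipher, x0=0.3456, r=3.99):
        chaos = logistic_sequence(x0, r, cipher.size)      # same key -> same sequence
        perm = np.argsort(chaos)
        keystream = np.floor(chaos * 256).astype(np.uint8)
        shuffled = cipher.ravel() ^ keystream
        flat = np.empty_like(shuffled)
        flat[perm] = shuffled                        # invert the permutation
        return flat.reshape(cipher.shape)
    ```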

  2. Is there a trend in CT scanning scaphoid nonunions for deformity assessment?-A systematic review.

    PubMed

    Ten Berg, Paul W L; de Roo, Marieke G A; Maas, Mario; Strackee, Simon D

    2017-06-01

    The effect of scaphoid nonunion deformity on wrist function is uncertain due to the lack of reliable imaging tools. Advanced three-dimensional (3-D) computed tomography (CT)-based imaging techniques may improve deformity assessment by using a mirrored image of the contralateral intact wrist as an anatomic reference. The implementation of such techniques depends on the extent to which conventional CT is currently used in standard practice. The purpose of this systematic review of the medical literature was to analyze the trend in CT scanning of scaphoid nonunions, either unilaterally or bilaterally. Using the Medline and Embase databases, two independent reviewers searched for original full-length clinical articles describing series with at least five patients focusing on reconstructive surgery of scaphoid nonunions with bone grafting and/or fixation, from the years 2000-2015. We excluded reports focusing only on nonunions suspected of avascular necrosis and/or treated with vascularized bone grafting, as their workup often includes magnetic resonance imaging. For data analysis, we evaluated the use of CT scans and distinguished between uni- and bilateral, and pre- and postoperative scans. Seventy-seven articles were included, of which 16 were published between 2000 and 2005, 19 between 2006 and 2010, and 42 between 2011 and 2015. For these consecutive intervals, the rates of articles describing the use of pre- and postoperative CT scans increased from 13%, to 16%, to 31%, and from 25%, to 32%, to 52%, respectively. Of these, only two (3%) articles described the use of bilateral CT scans. There is an evident trend toward performing unilateral CT scans before and after reconstructive surgery of a scaphoid nonunion. To improve assessment of scaphoid nonunion deformity using 3-D CT-based imaging techniques, we recommend scanning the contralateral wrist as well. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. RMB identification based on polarization parameters inversion imaging

    NASA Astrophysics Data System (ADS)

    Liu, Guoyan; Gao, Kun; Liu, Xuefeng; Ni, Guoqiang

    2016-10-01

    Counterfeit money threatens social order, and conventional anti-counterfeit technology is often too dated to determine a note's authenticity. The intrinsic difference between genuine and counterfeit notes lies in their paper tissue. In this paper, a new technology for detecting RMB is introduced: the polarization parameter indirect microscopic imaging technique. A conventional reflection microscope is used as the basic optical system, with a polarization-modulation mechanism inserted into it. The near-field structural characteristics are delivered by the coupling of the optical wave with the material. Based on the physics of this coupling and conduction, the changes in the optical wave parameters are calculated to obtain image intensity curves. By analyzing the near-field polarization parameters at the nanoscale, an indirect polarization parameter image of the fibers of the paper tissue is finally computed in order to identify the note's authenticity.

  4. High resolution OCT image generation using super resolution via sparse representation

    NASA Astrophysics Data System (ADS)

    Asif, Muhammad; Akram, Muhammad Usman; Hassan, Taimur; Shaukat, Arslan; Waqar, Razi

    2017-02-01

    In this paper we propose a technique for obtaining a high-resolution (HR) image from a single low-resolution (LR) image using a jointly learned dictionary, motivated by image statistics research. The idea is that, with an appropriate choice of an over-complete dictionary, image patches can be well represented as sparse linear combinations of its atoms. Medical imaging for clinical analysis and intervention is used to create visual representations of the interior of the body, as well as of the function of some organs or tissues (physiology). A number of medical imaging techniques are in use, such as MRI, CT, X-rays and Optical Coherence Tomography (OCT). OCT is one of the newer technologies in medical imaging, and one of its uses is in ophthalmology, where it is applied to analyze choroidal thickness in healthy eyes and in disease states such as age-related macular degeneration, central serous chorioretinopathy, diabetic retinopathy and inherited retinal dystrophies. We propose a technique for enhancing OCT images that can be used to clearly identify and analyze these particular diseases. Our method uses dictionary learning to generate a high-resolution image from a single input LR image. We train two joint dictionaries, one with OCT images and the second with multiple different natural images, and compare the results with a previous SR technique. For both dictionaries, the proposed method produces HR images that are superior in quality to those of the previous SR method. The proposed technique is very effective for noisy OCT images and produces up-sampled and enhanced OCT images.
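
    The core coupled-dictionary idea can be sketched as follows: sparse-code an LR patch over the LR dictionary, then reconstruct the HR patch with the same coefficients over the HR dictionary. Dictionary training is omitted; D_lr and D_hr are assumed to be jointly learned arrays of shape (n_atoms, patch_dim), and the sparsity level is an illustrative choice.

    ```python
    # Sketch: single-patch super-resolution via sparse coding over coupled dictionaries.
    import numpy as np
    from sklearn.decomposition import sparse_encode

    def sr_patch(lr_patch, D_lr, D_hr, n_nonzero=5):
        x = lr_patch.ravel()[None, :]                      # (1, lr_patch_dim)
        code = sparse_encode(x, D_lr, algorithm="omp",
                             n_nonzero_coefs=n_nonzero)    # (1, n_atoms)
        hr_vec = code @ D_hr                               # same code, HR dictionary
        side = int(np.sqrt(D_hr.shape[1]))                 # assumes square HR patches
        return hr_vec.reshape(side, side)
    ```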

  5. High-Resolution Ultrasound Imaging Using Model-Based Iterative Reconstruction for Canister Degradation Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatzidakis, Stylianos; Jarrell, Joshua J; Scaglione, John M

    The inspection of the dry storage canisters that house spent nuclear fuel is an important issue facing the nuclear industry; currently, there are limited options available to provide for even minimal inspections. An issue of concern is stress corrosion cracking (SCC) in austenitic stainless steel canisters. SCC is difficult to predict and exhibits small crack opening displacements on the order of 15–30 μm. Nondestructive examination (NDE) of such microscopic cracks is especially challenging, and it may be possible to miss SCC during inspections. The coarse grain microstructure at the heat-affected zone reduces the achievable sensitivity of conventional ultrasound techniques. At Oak Ridge National Laboratory, a tomographic approach is under development to improve SCC detection using ultrasonic guided waves and model-based iterative reconstruction (MBIR). Ultrasonic guided waves propagate parallel to the physical boundaries of the surface and allow for rapid inspection of a large area from a single probe location. MBIR is a novel, effective probabilistic imaging tool that offers higher precision and better image quality than current reconstruction techniques. This paper analyzes the canister environment, stainless steel microstructure, and SCC characteristics. The end goal is to demonstrate the feasibility of an NDE system based on ultrasonic guided waves and MBIR for canister degradation detection and to produce radar-like images of the canister surface with significantly improved image quality. The proposed methodology can potentially reduce human radiation exposure, result in lower operational costs, and provide a means to verify canister integrity in situ during extended storage.

  6. Computer vision based method and system for online measurement of geometric parameters of train wheel sets.

    PubMed

    Zhang, Zhi-Feng; Gao, Zhan; Liu, Yuan-Yuan; Jiang, Feng-Chun; Yang, Yan-Li; Ren, Yu-Fen; Yang, Hong-Jun; Yang, Kun; Zhang, Xiao-Dong

    2012-01-01

    Train wheel sets must be periodically inspected for possible or actual premature failures, and it is important to record the wear history over the full service life of the wheel sets. This means that an online measuring system could be of great benefit to overall process control. An online non-contact method for measuring a wheel set's geometric parameters based on the opto-electronic measuring technique is presented in this paper. A charge coupled device (CCD) camera with a selected optical lens and a frame grabber was used to capture the image of the light profile of the wheel set illuminated by a linear laser. The analogue signals of the image were transformed into corresponding digital grey level values. The 'mapping function method' is used to transform image pixel coordinates into space coordinates. The images of the wheel sets were captured as the train passed through the measuring system. The rim inside thickness and flange thickness were measured and analyzed. The spatial resolution of the whole image capturing system is about 0.33 mm. Theoretical and experimental results show that the online measurement system based on computer vision can meet wheel set measurement requirements.

  7. Visualization of Subsurface Defects in Composites using a Focal Plane Array Infrared Camera

    NASA Technical Reports Server (NTRS)

    Plotnikov, Yuri A.; Winfree, William P.

    1999-01-01

    A technique for enhanced defect visualization in composites via transient thermography is presented in this paper. The effort targets automated defect map construction for multiple defects located in the observed area. Experimental data were collected on composite panels of different thicknesses with square inclusions and flat bottom holes of different depths and orientations. The time evolution of the thermal response and spatial thermal profiles are analyzed. The pattern generated by carbon fibers and the vignetting effect of the focal plane array camera make defect visualization difficult. An improvement in defect visibility is made by the pulse phase technique and spatial background treatment. The relationship between the size of a defect and its reconstructed image is analyzed as well. An image processing technique for noise reduction is discussed.
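
    The pulse phase technique mentioned above amounts to taking the Fourier transform of each pixel's cooling curve and imaging the phase of a low-frequency bin, which suppresses non-uniform heating and vignetting. The sketch below assumes a frame stack after the thermal pulse; the bin choice is illustrative.

    ```python
    # Sketch: pulse phase thermography as a per-pixel FFT along the time axis.
    import numpy as np

    def phase_image(frames, bin_index=1):
        # frames: (n_frames, H, W) thermal sequence recorded after the pulse
        spectrum = np.fft.rfft(frames, axis=0)    # FFT of each pixel's cooling curve
        return np.angle(spectrum[bin_index])      # (H, W) phase map at that frequency

    # A subsurface defect shows up as a local phase contrast relative to the
    # sound background at the chosen frequency bin.
    ```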

  8. In-flight edge response measurements for high-spatial-resolution remote sensing systems

    NASA Astrophysics Data System (ADS)

    Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie

    2002-09-01

    In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
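
    The reduction described above (differentiate the assembled edge spread function, then take the full width at half maximum of the resulting line spread function) can be sketched as follows; the sub-pixel ESF assembly from the tilted edge is assumed to have been done already, and positions is the corresponding distance axis.

    ```python
    # Sketch: FWHM of the line spread function derived from an edge response.
    import numpy as np

    def fwhm_from_esf(positions, esf):
        lsf = np.gradient(esf, positions)             # LSF = derivative of the ESF
        lsf = np.abs(lsf) / np.max(np.abs(lsf))       # normalize the peak to 1
        above = np.where(lsf >= 0.5)[0]
        i0, i1 = above[0], above[-1]                  # assumes the peak is interior
        # Linear interpolation of the half-maximum crossings on each side.
        left = np.interp(0.5, [lsf[i0 - 1], lsf[i0]],
                         [positions[i0 - 1], positions[i0]])
        right = np.interp(0.5, [lsf[i1 + 1], lsf[i1]],
                          [positions[i1 + 1], positions[i1]])
        return right - left
    ```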

  9. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739

  10. High-speed digital imaging of cytosolic Ca2+ and contraction in single cardiomyocytes.

    PubMed

    O'Rourke, B; Reibel, D K; Thomas, A P

    1990-07-01

    A charge-coupled device (CCD) camera, with the capacity for simultaneous spatially resolved photon counting and rapid frame transfer, was utilized for high-speed digital image collection from an inverted epifluorescence microscope. The unique properties of the CCD detector were applied to an analysis of cell shortening and the Ca2+ transient from fluorescence images of fura-2-loaded [corrected] cardiomyocytes. On electrical stimulation of the cell, a series of sequential subimages was collected and used to create images of Ca2+ within the cell during contraction. The high photosensitivity of the camera, combined with a detector-based frame storage technique, permitted collection of fluorescence images 10 ms apart. This rate of image collection was sufficient to resolve the rapid events of contraction, e.g., the upstroke of the Ca2+ transient (less than 40 ms) and the time to peak shortening (less than 80 ms). The technique was used to examine the effects of beta-adrenoceptor activation, fura-2 load, and stimulus frequency on cytosolic Ca2+ transients and contractions of single cardiomyocytes. beta-Adrenoceptor stimulation resulted in pronounced increases in peak Ca2+, maximal rates of rise and decay of Ca2+, extent of shortening, and maximal velocities of shortening and relaxation. Raising the intracellular load of fura-2 had little effect on the rising phase of Ca2+ or the extent of shortening but extended the duration of the Ca2+ transient and contraction. In related experiments utilizing differential-interference contrast microscopy, the same technique was applied to visualize sarcomere dynamics in contracting cells. This newly developed technique is a versatile tool for analyzing the Ca2+ transient and mechanical events in studies of excitation-contraction coupling in cardiomyocytes.

  11. Spectral analysis of groove spacing on Ganymede

    NASA Technical Reports Server (NTRS)

    Grimm, R. E.

    1984-01-01

    The technique used to analyze groove spacing on Ganymede is presented. Data from Voyager images are used to determine the surface topography and the positions of the grooves. Power spectral estimates are statistically analyzed, and sample data are included.

  12. Covariance of lucky images for increasing objects contrast: diffraction-limited images in ground-based telescopes

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Colodro-Conde, Carlos; Villó-Pérez, Isidro; Pérez-Garrido, Antonio

    2016-01-01

    Images of stars adopt shapes far from the ideal Airy pattern due to atmospheric density fluctuations. Hence, diffraction-limited images can only be achieved by telescopes without atmospheric influence, e.g. space telescopes, or by using techniques like adaptive optics or lucky imaging. In this paper, we propose a new computational technique based on the evaluation of the COvariancE of Lucky Images (COELI). This technique allows us to discover companions to main stars by taking advantage of the atmospheric fluctuations. We describe the algorithm and we carry out a theoretical analysis of the improvement in contrast. We have used images taken with the 2.2-m Calar Alto telescope as a test bed for the technique, showing that, under certain conditions, the telescope's diffraction limit is clearly reached.
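
    One plausible, simplified reading of the covariance idea is to correlate every pixel's intensity fluctuations across the frame stack with those of a reference pixel (for example, the main star's peak); a faint companion shares the star's atmospheric fluctuations and so stands out in the covariance map. The sketch below illustrates that reading under those assumptions and is not the published COELI algorithm verbatim.

    ```python
    # Sketch: covariance map of a lucky-imaging frame stack against a reference pixel.
    import numpy as np

    def covariance_map(frames):
        # frames: (n_frames, H, W) stack of short exposures
        mean_img = frames.mean(axis=0)
        ref = np.unravel_index(np.argmax(mean_img), mean_img.shape)  # peak pixel (assumption)
        ref_dev = frames[:, ref[0], ref[1]] - frames[:, ref[0], ref[1]].mean()
        dev = frames - mean_img                                      # per-pixel fluctuations
        return np.tensordot(ref_dev, dev, axes=(0, 0)) / frames.shape[0]
    ```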

  13. Development and application of operational techniques for the inventory and monitoring of resources and uses for the Texas coastal zone

    NASA Technical Reports Server (NTRS)

    Harwood, P. (Principal Investigator); Malin, P.; Finley, R.; Mcculloch, S.; Murphy, D.; Hupp, B.; Schell, J. A.

    1977-01-01

    The author has identified the following significant results. Four LANDSAT scenes were analyzed for the Harbor Island area test sites to produce land cover and land use maps using both image interpretation and computer-assisted techniques. When evaluated against aerial photography, the mean accuracy for three scenes was 84% for the image interpretation product and 62% for the computer-assisted classification maps. Analysis of the fourth scene was not completed using the image interpretation technique because of a poor-quality false color composite, but results were available from the computer-assisted technique. Preliminary results indicate that these LANDSAT products can be applied to a variety of planning and management activities in the Texas coastal zone.

  14. Optimized imaging of the midface and orbits

    PubMed Central

    Langner, Sönke

    2015-01-01

    A variety of imaging techniques are available for imaging the midface and orbits. This review article describes the different imaging techniques based on the recent literature and discusses their impact on clinical routine imaging. Imaging protocols are presented for different diseases and the different imaging modalities. PMID:26770279

  15. Osteoporosis Imaging: State of the Art and Advanced Imaging

    PubMed Central

    2012-01-01

    Osteoporosis is becoming an increasingly important public health issue, and effective treatments to prevent fragility fractures are available. Osteoporosis imaging is of critical importance in identifying individuals at risk for fractures who would require pharmacotherapy to reduce fracture risk and also in monitoring response to treatment. Dual x-ray absorptiometry is currently the state-of-the-art technique to measure bone mineral density and to diagnose osteoporosis according to the World Health Organization guidelines. Motivated by a 2000 National Institutes of Health consensus conference, substantial research efforts have focused on assessing bone quality by using advanced imaging techniques. Among these techniques aimed at better characterizing fracture risk and treatment effects, high-resolution peripheral quantitative computed tomography (CT) currently plays a central role, and a large number of recent studies have used this technique to study trabecular and cortical bone architecture. Other techniques to analyze bone quality include multidetector CT, magnetic resonance imaging, and quantitative ultrasonography. In addition to quantitative imaging techniques measuring bone density and quality, imaging needs to be used to diagnose prevalent osteoporotic fractures, such as spine fractures on chest radiographs and sagittal multidetector CT reconstructions. Radiologists need to be sensitized to the fact that the presence of fragility fractures will alter patient care, and these fractures need to be described in the report. This review article covers state-of-the-art imaging techniques to measure bone mineral density, describes novel techniques to study bone quality, and focuses on how standard imaging techniques should be used to diagnose prevalent osteoporotic fractures. © RSNA, 2012 PMID:22438439

  16. Particle Characterization for a Protein Drug Product Stored in Pre-Filled Syringes Using Micro-Flow Imaging, Archimedes, and Quartz Crystal Microbalance with Dissipation.

    PubMed

    Zheng, Songyan; Puri, Aastha; Li, Jinjiang; Jaiswal, Archana; Adams, Monica

    2017-01-01

    Micro-flow imaging (MFI) has been used in formulation development for analyzing sub-visible particles. Archimedes, a novel technique for analyzing sub-micron particles, has been considered an orthogonal method to currently existing techniques. This study utilized these two techniques to investigate the effectiveness of polysorbate 80 (PS-80) in mitigating particle formation in a therapeutic protein formulation stored in silicone oil-coated pre-filled syringes. The results indicated that PS-80 prevented the formation of both protein and silicone oil particles. In the case of protein particles, PS-80 might be involved in interactions with the hydrophobic patches of the protein, air bubbles, and the stressed surfaces of silicone oil-coated pre-filled syringes. Such interactions played a role in mitigating the formation of protein particles. Subsequently, quartz crystal microbalance with dissipation (QCM-D) was utilized to characterize the interactions associated with silicone oil, protein, and PS-80 in the solutions. Based on the QCM-D results, we proposed that PS-80 likely formed a layer on the interior surfaces of the syringes. As a result, the adsorbed PS-80 might block the leakage of silicone oil from the surfaces into solution, so that silicone oil particles were mitigated in the presence of PS-80. Overall, this study demonstrated the necessity of utilizing these three techniques cooperatively in order to better understand the interfacial role of PS-80 in mitigating the formation of protein and silicone oil particles.

  17. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for an image size of 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.

  18. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we operated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry, along with digital image processing techniques, can provide the theoretical and practical fundamentals for data processing. Time-lapse image sequences from west Greenland reveal various phenomena. Problems include rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, foxes chewing instrument cables, and ravens pecking at the plastic window. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images need to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching to handle the enormous data volumes efficiently.

  19. Comparative analysis of numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.

    2017-07-01

    Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.

  20. Computed tomography: Will the slices reveal the truth

    PubMed Central

    Haridas, Harish; Mohan, Abarajithan; Papisetti, Sravanthi; Ealla, Kranti K. R.

    2016-01-01

    With the advances in the field of imaging sciences, new methods have been developed in dental radiology. These include digital radiography, density analyzing methods, cone beam computed tomography (CBCT), magnetic resonance imaging, ultrasound, and nuclear imaging techniques, which provide high-resolution detailed images of oral structures. The current review aims to critically elaborate the use of CBCT in endodontics. PMID:27652253

  1. Change detection from remotely sensed images: From pixel-based to object-based approaches

    NASA Astrophysics Data System (ADS)

    Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David

    2013-06-01

    The appetite for up-to-date information about earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.

  2. Mapping accuracy via spectrally and structurally based filtering techniques: comparisons through visual observations

    NASA Astrophysics Data System (ADS)

    Chockalingam, Letchumanan

    2005-01-01

    LANDSAT data for the Gunung Ledang region of Malaysia are used to map certain hydrogeological features. To map these features, image-processing tools such as contrast enhancement and edge detection techniques are employed. The advantages of these techniques over other methods are evaluated and discussed from the point of view of their validity in properly isolating features of hydrogeological interest. As these techniques exploit the spectral aspects of the images, they have several limitations in meeting the objectives. To address these limitations, a morphological transformation, which considers the structural rather than the spectral aspects of the image, is applied to provide comparisons between the results derived from spectrally based and structurally based filtering techniques.

  3. Optical remote sensing and correlation of office equipment functional state and stress levels via power quality disturbances inefficiencies

    NASA Astrophysics Data System (ADS)

    Sternberg, Oren; Bednarski, Valerie R.; Perez, Israel; Wheeland, Sara; Rockway, John D.

    2016-09-01

    Non-invasive optical techniques pertaining to the remote sensing of power quality disturbances (PQD) are part of an emerging technology field typically dominated by radio frequency (RF) and invasive-based techniques. Algorithms and methods to analyze and address PQD such as probabilistic neural networks and fully informed particle swarms have been explored in industry and academia. Such methods are tuned to work with RF equipment and electronics in existing power grids. As both commercial and defense assets are heavily power-dependent, understanding electrical transients and failure events using non-invasive detection techniques is crucial. In this paper we correlate power quality empirical models to the observed optical response. We also empirically demonstrate a first-order approach to map household, office and commercial equipment PQD to user functions and stress levels. We employ a physics-based image and signal processing approach, which demonstrates measured non-invasive (remote sensing) techniques to detect and map the base frequency associated with the power source to the various PQD on a calibrated source.

  4. Infrared/microwave (IR/MW) micromirror array beam combiner design and analysis.

    PubMed

    Tian, Yi; Lv, Lijun; Jiang, Liwei; Wang, Xin; Li, Yanhong; Yu, Haiming; Feng, Xiaochen; Li, Qi; Zhang, Li; Li, Zhuo

    2013-08-01

    We investigated the design method of an infrared (IR)/microwave (MW) micromirror array type of beam combiner. The micromirrors are microscopic in size, comparable to MW wavelengths, so that the MW will not react at these dimensions, whereas the much shorter optical wavelengths are reflected by them. Hence, the MW multilayered substrate was simplified and designed using transmission line theory. The beam combiner uses an IR wavefront-division imaging technique to reflect the IR radiation image to the pupil of the unit under test (UUT) in a parallel light path. In addition, the boresight error detected by phase monopulse radar was analyzed using the method of moments (MoM) with a multilevel fast multipole method (MLFMM) acceleration technique. The boresight error introduced by the finite size of the beam combiner was less than 1°. Finally, in order to verify the wavefront-division imaging technique, a prototype micromirror array was fabricated and IR imaging was tested. The IR images obtained with the thermal imager verified the correctness of the wavefront-division imaging technique.

  5. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Knowlton, Robert C.; Hoo, Kent S.; Huang, H. K.

    1995-05-01

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets, and thereby improve clinical decision-making processes, has been lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the noninvasive presurgical evaluation of epilepsy patients. These techniques include online access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantitation of structural and functional information contained in the registered images. For illustration, we describe the use of these techniques in a patient case of nonlesional neocortical epilepsy. We also present our future work based on preliminary studies.

  6. Automated assessment of thigh composition using machine learning for Dixon magnetic resonance images.

    PubMed

    Yang, Yu Xin; Chong, Mei Sian; Tay, Laura; Yew, Suzanne; Yeo, Audrey; Tan, Cher Heng

    2016-10-01

    To develop and validate a machine learning based automated segmentation method that jointly analyzes the four contrasts provided by the Dixon MRI technique for improved thigh composition segmentation accuracy. The automatic detection of body composition is formulated as a three-class classification problem. Each image voxel in the training dataset is assigned a correct label. A voxel classifier is trained and subsequently used to predict unseen data. Morphological operations are finally applied to generate volumetric segmented images for the different structures. We applied this algorithm to datasets of (1) four contrast images, (2) water and fat images, and (3) unsuppressed images acquired from 190 subjects. The proposed method using four contrasts achieved the most accurate and robust segmentation compared to the use of combined fat and water images or the unsuppressed image alone; average Dice coefficients of 0.94 ± 0.03, 0.96 ± 0.03, 0.80 ± 0.03, and 0.97 ± 0.01 were achieved for the bone region, subcutaneous adipose tissue (SAT), inter-muscular adipose tissue (IMAT), and muscle, respectively. Our proposed method based on machine learning produces accurate tissue quantification and makes effective use of the rich information provided by the four contrast images from Dixon MRI.
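
    The voxel-wise classification step can be sketched as below: stack the four Dixon contrasts into a per-voxel feature vector and train a supervised classifier on labelled voxels. The random-forest choice and array names are assumptions; the paper's own classifier and morphological post-processing are not reproduced in detail.

    ```python
    # Sketch: voxel classification of thigh tissue from four Dixon contrasts.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_voxel_classifier(contrasts, labels):
        # contrasts: (4, Z, Y, X) in-phase/opposed-phase/water/fat volumes
        # labels:    (Z, Y, X) integer tissue classes for the training subject(s)
        X = contrasts.reshape(4, -1).T        # (n_voxels, 4) feature matrix
        y = labels.ravel()
        clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
        clf.fit(X, y)
        return clf

    def predict_volume(clf, contrasts):
        X = contrasts.reshape(4, -1).T
        return clf.predict(X).reshape(contrasts.shape[1:])
    ```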

  7. Speckle tracking and speckle content based composite strain imaging for solid and fluid filled lesions.

    PubMed

    Rabbi, Md Shifat-E; Hasan, Md Kamrul

    2017-02-01

    Although strain imaging of solid lesions provides an effective way of determining their pathologic condition by displaying tissue stiffness contrast, such imaging is still an open problem for fluid-filled lesions. In this paper, we propose a novel speckle content based strain imaging technique for the visualization and classification of fluid-filled lesions in elastography after automatic identification of the presence of fluid-filled lesions. Speckle content based strain, defined as a function of speckle density based on the relationship between strain and speckle density, gives an indirect strain value for fluid-filled lesions. To measure the speckle density of the fluid-filled lesions, two new criteria based on the oscillation count of the windowed radio frequency signal and the local variance of the normalized B-mode image are used. An improved speckle tracking technique is also proposed for strain imaging of the solid lesions and background. A wavelet-based integration technique is then proposed for combining the strain images from these two techniques to visualize both solid and fluid-filled lesions within a common framework. The final output of our algorithm is a high-quality composite strain image which can effectively visualize both solid and fluid-filled breast lesions in addition to the speckle content of the fluid-filled lesions for their discrimination. The performance of our algorithm is evaluated using in vivo patient data and compared with recently reported techniques. The results show that both solid and fluid-filled lesions can be better visualized using our technique and that fluid-filled lesions can be classified with good accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
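
    The local-variance criterion on the normalized B-mode image can be sketched as below: an anechoic, fluid-filled region shows low local intensity variance. The window size and threshold are illustrative assumptions, and the oscillation-count criterion on the RF signal is not reproduced here.

    ```python
    # Sketch: local-variance map of a normalized B-mode image as a speckle-content cue.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_variance(bmode, window=9):
        mean = uniform_filter(bmode, size=window)
        mean_sq = uniform_filter(bmode * bmode, size=window)
        return mean_sq - mean * mean          # Var = E[x^2] - (E[x])^2 per window

    def fluid_candidate_mask(bmode, window=9, threshold=0.002):
        b = (bmode - bmode.min()) / (bmode.max() - bmode.min() + 1e-12)  # normalize to [0, 1]
        return local_variance(b, window) < threshold
    ```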

  8. Tight-frame based iterative image reconstruction for spectral breast CT

    PubMed Central

    Zhao, Bo; Gao, Hao; Ding, Huanjun; Molloi, Sabee

    2013-01-01

    Purpose: To investigate a tight-frame based iterative reconstruction (TFIR) technique for spectral breast computed tomography (CT) that uses fewer projections while achieving greater image quality. Methods: The experimental data were acquired with a fan-beam breast CT system based on a cadmium zinc telluride photon-counting detector. The images were reconstructed with a varying number of projections using the TFIR and filtered backprojection (FBP) techniques, and the image quality of the two techniques was compared. Spatial resolution was evaluated using a high-resolution phantom, and the contrast-to-noise ratio (CNR) was evaluated using a postmortem breast sample. The postmortem breast samples were decomposed into water, lipid, and protein contents based on images reconstructed from TFIR with 204 projections and FBP with 614 projections. The volumetric fractions of water, lipid, and protein from the image-based measurements in both TFIR and FBP were compared to the chemical analysis. Results: The spatial resolution and CNR were comparable for the images reconstructed by TFIR with 204 projections and FBP with 614 projections. Both reconstruction techniques provided accurate quantification of the water, lipid, and protein composition of the breast tissue when compared with data from the reference standard chemical analysis. Conclusions: Accurate breast tissue decomposition can be achieved with threefold fewer projection images by the TFIR technique without any reduction in spatial resolution or CNR. This can result in a two-thirds reduction of the patient dose in a multislit and multislice spiral CT system, in addition to reduced scanning time. PMID:23464320
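
    The tight-frame regularizer itself is not detailed in the abstract; the following is a loose, generic sketch of sparsity-regularized few-view reconstruction that alternates SART data-fidelity updates with wavelet-domain soft thresholding (wavelets standing in for the tight frame). The library calls are from scikit-image and PyWavelets, and the phantom, view count, and threshold are illustrative.

```python
# Sketch: few-view iterative CT reconstruction with wavelet soft-thresholding,
# a loose stand-in for tight-frame regularized reconstruction (TFIR).
import numpy as np
import pywt
from skimage.data import shepp_logan_phantom
from skimage.transform import resize, radon, iradon_sart

def wavelet_shrink(img, wavelet="db4", level=3, thr=0.02):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

phantom = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 60, endpoint=False)      # few projection angles
sino = radon(phantom, theta=theta)

recon = np.zeros_like(phantom)
for _ in range(10):
    recon = iradon_sart(sino, theta=theta, image=recon)   # data-fidelity step
    recon = wavelet_shrink(recon)[:128, :128]             # sparsity step (size-guarded)
```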

  9. The use of an image registration technique in the urban growth monitoring

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Foresti, C.; Deoliveira, M. D. L. N.; Niero, M.; Parreira, E. M. D. M. F.

    1984-01-01

    The use of an image registration program in studies of urban growth is described. This program permits quick identification of growing areas by overlapping the same scene from different periods and applying adequate filters. The city of Brasilia, Brazil, is selected as the test area. The dynamics of Brasilia's urban growth are analyzed by overlapping scenes dated June 1973, 1978, and 1983. The results demonstrate the utility of the image registration technique for monitoring dynamic urban growth.

  10. Laser applications and system considerations in ocular imaging

    PubMed Central

    Elsner, Ann E.; Muller, Matthew S.

    2009-01-01

    We review laser applications for primarily in vivo ocular imaging techniques, describing their constraints based on biological tissue properties, safety, and the performance of the imaging system. We discuss the need for cost effective sources with practical wavelength tuning capabilities for spectral studies. Techniques to probe the pathological changes of layers beneath the highly scattering retina and diagnose the onset of various eye diseases are described. The recent development of several optical coherence tomography based systems for functional ocular imaging is reviewed, as well as linear and nonlinear ocular imaging techniques performed with ultrafast lasers, emphasizing recent source developments and methods to enhance imaging contrast. PMID:21052482

  11. Scoping Study of Machine Learning Techniques for Visualization and Analysis of Multi-source Data in Nuclear Safeguards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yonggang

    In implementation of nuclear safeguards, many different techniques are being used to monitor operation of nuclear facilities and safeguard nuclear materials, ranging from radiation detectors, flow monitors, video surveillance, satellite imagers, digital seals to open source search and reports of onsite inspections/verifications. Each technique measures one or more unique properties related to nuclear materials or operation processes. Because these data sets have no or loose correlations, it could be beneficial to analyze the data sets together to improve the effectiveness and efficiency of safeguards processes. Advanced visualization techniques and machine-learning based multi-modality analysis could be effective tools in such integrated analysis. In this project, we will conduct a survey of existing visualization and analysis techniques for multi-source data and assess their potential values in nuclear safeguards.

  12. Exploring access to scientific literature using content-based image retrieval

    NASA Astrophysics Data System (ADS)

    Deserno, Thomas M.; Antani, Sameer; Long, Rodney

    2007-03-01

    The number of articles published in the scientific medical literature is continuously increasing, and Web access to the journals is becoming common. Databases such as SPIE Digital Library, IEEE Xplore, indices such as PubMed, and search engines such as Google provide the user with sophisticated full-text search capabilities. However, information in images and graphs within these articles is entirely disregarded. In this paper, we quantify the potential impact of using content-based image retrieval (CBIR) to access this non-text data. Based on the Journal Citation Reports (JCR), the journal Radiology was selected for this study. In 2005, 734 articles were published electronically in this journal. These included 2,587 figures, which yields a rate of 3.52 figures per article. Furthermore, 56.4% of these figures are composed of several individual panels, i.e., the figure combines several images and/or graphs. According to the Image Cross-Language Evaluation Forum (ImageCLEF), the error rate of automatic identification of medical images is about 15%. Therefore, it is expected that, by applying ImageCLEF-like techniques, 95.5% of articles could already be retrieved by means of CBIR. The challenge for CBIR in scientific literature, however, is the use of local texture properties to analyze individual image panels in composite illustrations. Using local features for content-based image representation, 8.81 images per article are available, and the predicted correctness rate may increase to 98.3%. From this study, we conclude that CBIR may have a high impact in medical literature research and suggest that additional research in this area is warranted.

  13. Automatic quantitative analysis of in-stent restenosis using FD-OCT in vivo intra-arterial imaging.

    PubMed

    Mandelias, Kostas; Tsantis, Stavros; Spiliopoulos, Stavros; Katsakiori, Paraskevi F; Karnabatidis, Dimitris; Nikiforidis, George C; Kagadis, George C

    2013-06-01

    A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular optical coherence tomography (OCT) images for the purpose of quantitative analysis of in-stent restenosis (ISR). In addition, a user-friendly graphical user interface (GUI) is developed around the algorithm with a view toward clinical use. Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analyzed. First, a segmentation method based on fuzzy C-means (FCM) clustering and the wavelet transform (WT) was applied for inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function. The inner lumen contour and the stent strut positions were extracted with high precision. Compared to manual segmentation by an expert physician, the automatic lumen contour delineation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The strut detection procedure achieved an overall accuracy of 93.80% and successfully identified 9.57 ± 0.5 struts per OCT image. Processing time was confined to approximately 2.5 s per OCT frame. A new, fast and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm, integrated in a GUI, represents a step toward the use of automated quantitative analysis of ISR in clinical practice.
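
    The following is a minimal, intensity-only fuzzy C-means sketch in NumPy (three classes, e.g. lumen, vessel wall, background); the paper's actual method additionally feeds wavelet-derived metrics into the membership function, which is omitted here.

```python
# Sketch: plain fuzzy C-means clustering of pixel intensities (no wavelet term).
import numpy as np

def fuzzy_cmeans(x, n_clusters=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """x: 1-D array of pixel intensities; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0, keepdims=True)             # memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = um @ x / um.sum(axis=1)         # fuzzily weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / (d ** (2.0 / (m - 1.0)))    # standard FCM membership update
        u_new /= u_new.sum(axis=0, keepdims=True)
        if np.max(np.abs(u_new - u)) < eps:
            u = u_new
            break
        u = u_new
    return centers, u

# usage (oct_image is a hypothetical 2-D grayscale OCT frame):
#   centers, u = fuzzy_cmeans(oct_image.ravel().astype(float))
#   labels = np.argmax(u, axis=0).reshape(oct_image.shape)
```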

  14. Guided wave localization of damage via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Levine, Ross M.; Michaels, Jennifer E.; Lee, Sang Jun

    2012-05-01

    Ultrasonic guided waves are frequently applied for structural health monitoring and nondestructive evaluation of plate-like metallic and composite structures. Spatially distributed arrays of fixed piezoelectric transducers can be used to detect damage by recording and analyzing all pairwise signal combinations. By subtracting pre-recorded baseline signals, the effects due to scatterer interactions can be isolated. Given these residual signals, techniques such as delay-and-sum imaging are capable of detecting flaws, but do not exploit the expected sparse nature of damage. It is desired to determine the location of a possible flaw by leveraging the anticipated sparsity of damage; i.e., most of the structure is assumed to be damage-free. Unlike least-squares methods, L1-norm minimization techniques favor sparse solutions to inverse problems such as the one considered here of locating damage. Using this type of method, it is possible to exploit sparsity of damage by formulating the imaging process as an optimization problem. A model-based damage localization method is presented that simultaneously decomposes all scattered signals into location-based signal components. The method is first applied to simulated data to investigate sensitivity to both model mismatch and additive noise, and then to experimental data recorded from an aluminum plate with artificial damage. Compared to delay-and-sum imaging, results exhibit a significant reduction in both spot size and imaging artifacts when the model is reasonably well-matched to the data.
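
    As an illustration of the sparsity-promoting formulation, the sketch below solves min_x 0.5*||y - Ax||^2 + lam*||x||_1 with plain iterative soft-thresholding (ISTA), where the columns of A would hold modeled location-based scattering signatures; it is a generic solver, not the authors' decomposition method.

```python
# Sketch: ISTA for the L1-regularized inverse problem  min_x 0.5*||y - Ax||^2 + lam*||x||_1.
# A: (n_samples, n_pixels) dictionary of modeled residual signals, one column per
# candidate damage location; y: stacked measured residual signals.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x                               # large entries indicate likely damage locations
```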

  15. Recent Progress in Optical Biosensors Based on Smartphone Platforms

    PubMed Central

    Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda

    2017-01-01

    With the rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity of implementation of the platforms. PMID:29068375

  16. Recent Progress in Optical Biosensors Based on Smartphone Platforms.

    PubMed

    Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda

    2017-10-25

    With the rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity of implementation of the platforms.

  17. CytoSpectre: a tool for spectral analysis of oriented structures on cellular and subcellular levels.

    PubMed

    Kartasalo, Kimmo; Pölönen, Risto-Pekka; Ojala, Marisa; Rasku, Jyrki; Lekkala, Jukka; Aalto-Setälä, Katriina; Kallio, Pasi

    2015-10-26

    Orientation and the degree of isotropy are important in many biological systems such as the sarcomeres of cardiomyocytes and other fibrillar structures of the cytoskeleton. Image-based analysis of such structures is often limited to qualitative evaluation by human experts, hampering the throughput, repeatability and reliability of the analyses. Software tools are not readily available for this purpose and the existing methods typically rely at least partly on manual operation. We developed CytoSpectre, an automated tool based on spectral analysis, allowing the quantification of orientation and also size distributions of structures in microscopy images. CytoSpectre utilizes the Fourier transform to estimate the power spectrum of an image and, based on the spectrum, computes parameter values describing, among others, the mean orientation, isotropy and size of target structures. The analysis can be further tuned to focus on targets of particular size at cellular or subcellular scales. The software can be operated via a graphical user interface without any programming expertise. We analyzed the performance of CytoSpectre by extensive simulations using artificial images, by benchmarking against FibrilTool and by comparisons with manual measurements performed for real images by a panel of human experts. The software was found to be tolerant of noise and blurring and superior to FibrilTool when analyzing realistic targets with degraded image quality. The analysis of real images indicated generally good agreement between computational and manual results while also revealing notable expert-to-expert variation. Moreover, the experiment showed that CytoSpectre can handle images of different cell types obtained using different microscopy techniques. Finally, we studied the effect of mechanical stretching on cardiomyocytes to demonstrate the software in an actual experiment and observed changes in cellular orientation in response to stretching. CytoSpectre, a versatile, easy-to-use software tool for spectral analysis of microscopy images, was developed. The tool is compatible with most 2D images and can be used to analyze targets at different scales. We expect the tool to be useful in diverse applications dealing with structures whose orientation and size distributions are of interest. While designed for the biological field, the software could also be useful in non-biological applications.
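
    The sketch below reproduces the core spectral idea in NumPy: estimate the 2-D power spectrum, then summarize the dominant orientation and degree of isotropy with circular statistics on doubled angles. It is an illustration of the approach only, not the CytoSpectre implementation (which additionally models size distributions and separate spectral components).

```python
# Sketch: mean orientation and isotropy of image structures from the FFT power spectrum.
import numpy as np

def spectral_orientation(img):
    img = img - img.mean()
    P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2        # power spectrum
    h, w = img.shape
    v = (np.arange(h) - h // 2)[:, None].astype(float)         # vertical frequency index
    u = (np.arange(w) - w // 2)[None, :].astype(float)         # horizontal frequency index
    r = np.hypot(u, v)
    P = np.where(r < 2, 0.0, P)                                # drop the DC neighbourhood
    theta = np.arctan2(v, u)                                   # spectral angle per bin
    # Axial (180-degree periodic) statistics: work with doubled angles.
    C = np.sum(P * np.cos(2 * theta))
    S = np.sum(P * np.sin(2 * theta))
    spec_angle = 0.5 * np.arctan2(S, C)
    # Elongated structures put spectral energy perpendicular to their own orientation.
    orientation = spec_angle + np.pi / 2
    resultant = np.hypot(C, S) / (P.sum() + 1e-12)
    isotropy = 1.0 - resultant                                 # 1 = isotropic, 0 = perfectly aligned
    return np.rad2deg(orientation) % 180.0, isotropy
```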

  18. Skin Temperature Recording with Phosphors

    PubMed Central

    Lawson, Ray N.; Alt, Leslie L.

    1965-01-01

    New knowledge of temperature irregularities associated with various disease states has resulted in increasing interest in the recording of heat radiation from the human body. Infrared radiation from the skin is a surface phenomenon and the amount of such radiation increases with temperature. Previous recording techniques have been not only crude but difficult and expensive. An unconventional thermal imaging system is described which gives superior temperature patterns and is also simpler and cheaper than any of the other available procedures. This system is based on the employment of thermally sensitive phosphors which glow when exposed to ultraviolet illumination, in inverse proportion to the underlying temperature. The thermal image can be directly observed or more critically analyzed and photographed on a simple closed-circuit television monitor. PMID:14270208

  19. Cell culture imaging using microimpedance tomography.

    PubMed

    Linderholm, Pontus; Marescot, Laurent; Loke, Meng Heng; Renaud, Philippe

    2008-01-01

    We present a novel, inexpensive, and fast microimpedance tomography system for two-dimensional imaging of cell and tissue cultures. The system is based on four-electrode measurements using 16 planar microelectrodes (5 microm x 4 mm) integrated into a culture chamber. An Agilent 4294A impedance analyzer combined with a front-end amplifier is used for the impedance measurements. Two-dimensional images are obtained using a reconstruction algorithm. This system is capable of accurately resolving the shape and position of a human hair, yielding vertical cross sections of the object. Human epithelial stem cells (YF 29) are also grown directly on the device surface. Tissue growth can be followed over several days. A rapid resistivity decrease caused by permeabilized cell membranes is also monitored, suggesting that this technique can be used in electroporation studies.

  20. Automatic Tracking Of Remote Sensing Precipitation Data Using Genetic Algorithm Image Registration Based Automatic Morphing: September 1999 Storm Floyd Case Study

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Vongsaard, J.; El-Ghazawi, T.; Weinman, J.; Yang, R.; Kafatos, M.

    Due to the poor temporal sampling by satellites, data gaps exist in satellite-derived time series of precipitation. This poses a challenge for assimilating rainfall data into forecast models. To yield a continuous time series, the classic image processing technique of digital image morphing has been used. However, the digital morphing technique has been applied manually, which is time consuming. To avoid human intervention in the process, an automatic procedure for image morphing is needed for real-time operations. For this purpose, the Genetic Algorithm Based Image Registration Automatic Morphing (GRAM) model was developed and tested in this paper. Specifically, an automatic morphing technique was integrated with a genetic algorithm and the Feature Based Image Metamorphosis technique to fill in data gaps between satellite coverage. The technique was tested using NOWRAD data, which are generated from the network of NEXRAD radars. Time series of NOWRAD data from storm Floyd, which occurred over the US eastern region on September 16, 1999, were used for 00:00, 01:00, 02:00, 03:00, and 04:00 am. The GRAM technique was applied to data collected at 00:00 and 04:00 am; these images were also manually morphed. Images at 01:00, 02:00 and 03:00 am were interpolated from the GRAM and manual morphing and compared with the original NOWRAD rain rates. The results show that the GRAM technique outperforms manual morphing: the correlation coefficients for the manually morphed images are 0.905, 0.900, and 0.905 at 01:00, 02:00, and 03:00 am, while the corresponding correlation coefficients for the GRAM technique are 0.946, 0.911, and 0.913, respectively. Index terms: Remote Sensing, Image Registration, Hydrology, Genetic Algorithm, Morphing, NEXRAD

  1. MASS SPECTROMETRY IMAGING FOR DRUGS AND METABOLITES

    PubMed Central

    Greer, Tyler; Sturm, Robert; Li, Lingjun

    2011-01-01

    Mass spectrometric imaging (MSI) is a powerful analytical technique that provides two- and three-dimensional spatial maps of multiple compounds in a single experiment. This technique has been routinely applied to protein, peptide, and lipid molecules with much less research reporting small molecule distributions, especially pharmaceutical drugs. This review’s main focus is to provide readers with an up-to-date description of the substrates and compounds that have been analyzed for drug and metabolite composition using MSI technology. Additionally, ionization techniques, sample preparation, and instrumentation developments are discussed. PMID:21515430

  2. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. Using terahertz imaging, however, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  3. An assessment of multimodal imaging of subsurface text in mummy cartonnage using surrogate papyrus phantoms

    DOE PAGES

    Gibson, Adam; Piquette, Kathryn E.; Bergmann, Uwe; ...

    2018-02-26

    Ancient Egyptian mummies were often covered with an outer casing, panels and masks made from cartonnage: a lightweight material made from linen, plaster, and recycled papyrus held together with adhesive. Egyptologists, papyrologists, and historians aim to recover and read extant text on the papyrus contained within cartonnage layers, but some methods, such as dissolving mummy casings, are destructive. The use of an advanced range of different imaging modalities was investigated to test the feasibility of non-destructive approaches applied to multi-layered papyrus found in ancient Egyptian mummy cartonnage. Eight different techniques were compared by imaging four synthetic phantoms designed to provide robust, well-understood, yet relevant sample standards using modern papyrus and replica inks. The techniques include optical (multispectral imaging with reflection and transillumination, and optical coherence tomography), X-ray (X-ray fluorescence imaging, X-ray fluorescence spectroscopy, X-ray micro computed tomography and phase contrast X-ray) and terahertz-based approaches. Optical imaging techniques were able to detect inks on all four phantoms, but were unable to significantly penetrate papyrus. X-ray-based techniques were sensitive to iron-based inks with excellent penetration but were not able to detect carbon-based inks. Using terahertz imaging, however, it was possible to detect carbon-based inks with good penetration but with less sensitivity to iron-based inks. The phantoms allowed reliable and repeatable tests to be made at multiple sites on three continents. Finally, the tests demonstrated that each imaging modality needs to be optimised for this particular application: it is, in general, not sufficient to repurpose an existing device without modification. Furthermore, it is likely that no single imaging technique will be able to robustly detect and enable the reading of text within ancient Egyptian mummy cartonnage. However, by carefully selecting, optimising and combining techniques, text contained within these fragile and rare artefacts may eventually be open to non-destructive imaging, identification, and interpretation.

  4. Soft-tissue and phase-contrast imaging at the Swiss Light Source

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Mohan, Nishant; Stampanoni, Marco; Muller, Ralph

    2004-05-01

    Recent results show that bone vasculature is a major contributor to local tissue porosity and can therefore be directly linked to the mechanical properties of bone tissue. With the advent of third-generation synchrotron radiation (SR) sources, micro-computed tomography (μCT) with resolutions on the order of 1 μm and better has become feasible. This technique has frequently been employed to analyze trabecular architecture and local bone tissue properties, i.e. the hard or mineralized bone tissue. Nevertheless, less is known about the soft tissues in bone, mainly due to inadequate imaging capabilities. Here, we discuss three different methods and applications to visualize soft tissues. The first approach is referred to as negative imaging; in this case the material around the soft tissue provides the absorption contrast necessary for X-ray based tomography. Bone vasculature from two different mouse strains was investigated and compared qualitatively, and differences were observed in terms of local vessel number and vessel orientation. The second technique is corrosion casting, which is principally adapted for imaging of vascular systems and has already been applied successfully at the Swiss Light Source. Using this technology we were able to show that pathological features reminiscent of Alzheimer's disease could be distinguished in the brain vasculature of APP transgenic mice. The third technique discussed here is phase contrast imaging, exploiting the high degree of coherence of third-generation synchrotron light sources, which provides the necessary physical conditions for phase contrast. The in-line approach followed here for phase contrast retrieval is a modification of the Gerchberg-Saxton-Fienup type. Several measurements and theoretical considerations concerning phase contrast imaging are presented, including mathematical phase retrieval. Although up to now only phase images have been computed, the approach is ready to retrieve the phase for a large number of angular positions of the specimen, allowing application of holotomography, which is the three-dimensional reconstruction of phase images.
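
    For illustration, here is a minimal classical Gerchberg-Saxton iteration in NumPy, recovering a phase from amplitude measurements in the object and Fourier planes; the Gerchberg-Saxton-Fienup variant used for the in-line phase retrieval described above adds constraints and propagators not shown here.

```python
# Sketch: classical Gerchberg-Saxton iteration between object and Fourier planes.
# Inputs are measured *amplitudes* (square roots of intensities) in the two planes.
import numpy as np

def gerchberg_saxton(obj_amp, fourier_amp, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, obj_amp.shape)        # random initial phase
    field = obj_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = fourier_amp * np.exp(1j * np.angle(F))          # impose Fourier-plane amplitude
        field = np.fft.ifft2(F)
        field = obj_amp * np.exp(1j * np.angle(field))      # impose object-plane amplitude
    return np.angle(field)                                  # recovered object-plane phase
```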

  5. Research on feature extraction techniques of Hainan Li brocade pattern

    NASA Astrophysics Data System (ADS)

    Zhou, Yuping; Chen, Fuqiang; Zhou, Yuhua

    2016-03-01

    Hainan Li brocade weaving skills have been listed as world intangible cultural heritage in need of preservation; research on Hainan Li brocade patterns therefore plays an important role in the inheritance of Li brocade culture. In this paper, the meaning of Li brocade patterns is analyzed and shape feature extraction techniques for original Li brocade patterns are proposed, based on a contour tracking algorithm. First, edge detection is performed on the design patterns; then a morphological closing operation is used to smooth the image, and finally contour tracking is used to extract the outer contours of the Li brocade patterns. The extracted contour features are processed by means of morphology, and digital characteristics of the contours are obtained with invariant moments. Finally, different Li brocade design patterns are briefly analyzed according to these digital characteristics. The results show that this method of extracting Li brocade pattern shapes is feasible and effective.
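
    A minimal OpenCV sketch of the described pipeline (edge detection, morphological closing, outer-contour extraction, invariant moments) follows; the kernel size, Canny thresholds, and area filter are illustrative choices, not the paper's parameters.

```python
# Sketch: outer-contour extraction and Hu invariant moments for a pattern image.
import cv2
import numpy as np

def pattern_contour_features(gray):
    edges = cv2.Canny(gray, 50, 150)                             # edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)    # smooth broken edges
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,    # OpenCV 4.x signature
                                   cv2.CHAIN_APPROX_SIMPLE)      # outer contours only
    features = []
    for c in contours:
        if cv2.contourArea(c) < 100:                             # skip tiny fragments
            continue
        hu = cv2.HuMoments(cv2.moments(c)).flatten()             # 7 invariant moments
        features.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))  # log-scaled values
    return contours, features
```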

  6. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.

  7. A real time quality control application for animal production by image processing.

    PubMed

    Sungur, Cemil; Özkan, Halil

    2015-11-01

    Standards of hygiene and health are of major importance in food production, and quality control has become obligatory in this field. Thanks to rapidly developing technologies, automatic and safe quality control of food production is now possible. For this purpose, image-processing-based quality control systems are employed in industrial applications to analyze the quality of food products. In this study, quality control of chicken (Gallus domesticus) eggs was achieved using a real-time image-processing technique. To execute the quality control processes, a conveying mechanism was used. Eggs passing on a conveyor belt were continuously photographed in real time by cameras located above the belt. The images obtained were processed by various methods and techniques: using digital instrumentation, the volume of the eggs was measured, broken/cracked eggs were separated, and dirty eggs were identified. In accordance with international standards for classifying egg quality, the class of the separated eggs was determined through a fuzzy implication model. According to tests carried out on thousands of eggs, a quality control process with an accuracy of 98% was possible. © 2014 Society of Chemical Industry.

  8. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    PubMed

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

    In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images from human fovea and the optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, however, without the need of data resampling.

  9. Intra- and inter-rater reliability of digital image analysis for skin color measurement

    PubMed Central

    Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison

    2013-01-01

    Background We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Methods Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe® Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor® in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Conclusion Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. PMID:23551208

  10. Intra- and inter-rater reliability of digital image analysis for skin color measurement.

    PubMed

    Sommers, Marilyn; Beacham, Barbara; Baker, Rachel; Fargo, Jamison

    2013-11-01

    We determined the intra- and inter-rater reliability of data from digital image color analysis between an expert and novice analyst. Following training, the expert and novice independently analyzed 210 randomly ordered images. Both analysts used Adobe(®) Photoshop lasso or color sampler tools based on the type of image file. After color correction with Pictocolor(®) in camera software, they recorded L*a*b* (L*=light/dark; a*=red/green; b*=yellow/blue) color values for all skin sites. We computed intra-rater and inter-rater agreement within anatomical region, color value (L*, a*, b*), and technique (lasso, color sampler) using a series of one-way intra-class correlation coefficients (ICCs). Results of ICCs for intra-rater agreement showed high levels of internal consistency reliability within each rater for the lasso technique (ICC ≥ 0.99) and somewhat lower, yet acceptable, level of agreement for the color sampler technique (ICC = 0.91 for expert, ICC = 0.81 for novice). Skin L*, skin b*, and labia L* values reached the highest level of agreement (ICC ≥ 0.92) and skin a*, labia b*, and vaginal wall b* were the lowest (ICC ≥ 0.64). Data from novice analysts can achieve high levels of agreement with data from expert analysts with training and the use of a detailed, standard protocol. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
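
    For reference, a one-way random-effects intraclass correlation, ICC(1,1), can be computed directly from an n_subjects x n_raters matrix of ratings as sketched below; this is the generic formula, not the authors' statistical code.

```python
# Sketch: one-way random-effects intraclass correlation, ICC(1,1).
# ratings: array of shape (n_subjects, n_raters), e.g. L* values for one skin site
# as measured by the expert and the novice analyst.
import numpy as np

def icc_oneway(ratings):
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    ss_between = k * np.sum((subject_means - grand_mean) ** 2)     # between-subject SS
    ss_within = np.sum((ratings - subject_means[:, None]) ** 2)    # within-subject SS
    ms_between = ss_between / (n - 1)
    ms_within = ss_within / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```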

  11. Monitoring of surface velocity of hyper-concentrated flow in a laboratory flume by means of fully-digital PIV

    NASA Astrophysics Data System (ADS)

    Termini, Donatella; Di Leonardo, Alice

    2016-04-01

    High flow conditions, which are generally characterized by high sediment concentrations, do not permit the use of traditional measurement equipment. Traditional techniques are usually based on intrusive measurement of the vertical profile of flow velocity and on linking water depth with discharge through the rating curve. The major disadvantage of these measurement techniques is that they are difficult to use and unsafe for operators, especially in high flow conditions. As the literature shows (see, as an example, Moramarco and Termini, 2015), it is precisely in such conditions that measurement of the surface velocity distribution is important for evaluating the mean flow velocity and, thus, the flow discharge. In the last decade, image-based techniques have been increasingly used for surface velocity measurements (among others, Jodeau et al., 2008). An experimental program has recently been conducted at the hydraulic laboratory of the Department of Civil, Environmental, Aerospace and Materials Engineering (DICAM) - University of Palermo (Italy) in order to analyze the propagation of hyper-concentrated flow in a defense channel. The experimental apparatus includes a high-precision camera and a system for recording the images. This paper investigates the utility and efficiency of the digital image technique for remote monitoring of surface velocity in hyper-concentrated flow, with the aid of data collected during experiments conducted in the laboratory flume. In particular, attention is focused on the procedure for estimating the velocity vectors and on its sensitivity to the parameters of the image-processing procedure (number of images, spatial resolution of the interrogation area). References: Jodeau M., Hauet A., Paquier A., Le Coz J., Dramais G., Application and evaluation of LS-PIV technique for the monitoring of river surface in high flow conditions, Flow Measurement and Instrumentation, Vol. 19, No. 2, 2008, pp. 117-127. Moramarco T., Termini D., Entropic approach to estimate the mean flow velocity: experimental investigation in laboratory flumes, Environmental Fluid Mechanics, Vol. 15, No. 1, 2015.
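
    The core of an image-based (PIV/LSPIV-style) surface-velocity estimate is the displacement of an interrogation window between two consecutive frames, recoverable from the peak of their cross-correlation. A minimal FFT-based sketch follows; the window size, pixel-to-metre scale, and frame interval are illustrative parameters, not those of the experiments above.

```python
# Sketch: displacement of one interrogation window between consecutive frames via
# FFT cross-correlation; surface speed = |displacement| * scale / frame interval.
import numpy as np

def window_displacement(win_a, win_b):
    """Shift of win_b relative to win_a, as (dy, dx) in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - corr.shape[0] // 2, peak[1] - corr.shape[1] // 2

def surface_speed(frame1, frame2, y0, x0, win=64, m_per_px=0.002, dt=0.04):
    w1 = frame1[y0:y0 + win, x0:x0 + win].astype(float)
    w2 = frame2[y0:y0 + win, x0:x0 + win].astype(float)
    dy, dx = window_displacement(w1, w2)
    return np.hypot(dx, dy) * m_per_px / dt        # metres per second
```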

  12. Content based image retrieval using local binary pattern operator and data mining techniques.

    PubMed

    Vatamanu, Oana Astrid; Frandeş, Mirela; Lungeanu, Diana; Mihalaş, Gheorghe-Ioan

    2015-01-01

    Content based image retrieval (CBIR) concerns the retrieval of similar images from image databases using feature vectors extracted from the images. These feature vectors globally describe the visual content of an image in terms of, e.g., texture, colour, shape, and spatial relations. Herein, we propose the definition of feature vectors using the Local Binary Pattern (LBP) operator. A study was performed to determine the optimum LBP variant for the general definition of image feature vectors. The chosen LBP variant is then used to build an ultrasound image database and a database of images obtained from Wireless Capsule Endoscopy. The image indexing process is optimized using data clustering techniques for images belonging to the same class. Finally, the proposed indexing method is compared to the classical, widely used indexing technique.
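
    A minimal sketch of LBP-based feature vectors using scikit-image follows; the specific LBP variant, radius, and distance metric selected in the study may differ from the illustrative choices here.

```python
# Sketch: uniform-LBP histogram as a CBIR feature vector, compared with a chi-square distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_feature(gray, P=8, R=1):
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                   # P+1 uniform codes plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square_distance(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# usage: rank database images by chi_square_distance(lbp_feature(query), lbp_feature(candidate))
```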

  13. Ultra-low field nuclear magnetic resonance and magnetic resonance imaging to discriminate and identify materials

    DOEpatents

    Kraus, Robert H.; Matlashov, Andrei N.; Espy, Michelle A.; Volegov, Petr L.

    2010-03-30

    An ultra-low magnetic field NMR system can non-invasively examine containers. Database matching techniques can then identify hazardous materials within the containers. Ultra-low field NMR systems are ideal for this purpose because they do not require large powerful magnets and because they can examine materials enclosed in conductive shells such as lead shells. The NMR examination technique can be combined with ultra-low field NMR imaging, where an NMR image is obtained and analyzed to identify target volumes. Spatial sensitivity encoding can also be used to identify target volumes. After the target volumes are identified the NMR measurement technique can be used to identify their contents.

  14. Elemental mapping in a contemporary miniature by full-field X-ray fluorescence imaging with gaseous detector vs. scanning X-ray fluorescence imaging with polycapillary optics

    NASA Astrophysics Data System (ADS)

    Silva, A. L. M.; Cirino, S.; Carvalho, M. L.; Manso, M.; Pessanha, S.; Azevedo, C. D. R.; Carramate, L. F. N. D.; Santos, J. P.; Guerra, M.; Veloso, J. F. C. A.

    2017-03-01

    Energy dispersive X-ray imaging can be used in several research fields and industrial applications. Elemental mapping through the energy dispersive X-ray imaging technique has become a promising method for obtaining the positional distribution of specific elements in a non-destructive way. To obtain the elemental distribution of a sample it is necessary to use instruments providing precise positioning together with good energy resolution. Polycapillary optics combined with silicon drift chamber detectors are used in several commercial systems and are considered state-of-the-art spectrometers; however, they are usually very costly. A new concept of large energy dispersive X-ray imaging systems based on gaseous radiation detectors has emerged in recent years, enabling promising 2D elemental detection at a much reduced cost. The main goal of this work is to analyze a contemporary Indian miniature with both X-ray fluorescence imaging systems: the one based on the gaseous detector 2D-THCOBRA and the state-of-the-art spectrometer M4 Tornado from Bruker. The performance of both systems is compared and evaluated in the context of the sample's analysis.

  15. Fractal-Based Image Analysis In Radiological Applications

    NASA Astrophysics Data System (ADS)

    Dellepiane, S.; Serpico, S. B.; Vernazza, G.; Viviani, R.

    1987-10-01

    We present some preliminary results of a study aimed at assessing the actual effectiveness of fractal theory and at defining its limitations in the area of medical image analysis for texture description, in particular in radiological applications. A general analysis to select appropriate parameters (mask size, tolerance on fractal dimension estimation, etc.) was performed on synthetically generated images of known fractal dimension. Moreover, we analyzed some radiological images of human organs in which pathological areas can be observed. Input images were subdivided into blocks of 6x6 pixels; then, for each block, the fractal dimension was computed in order to create fractal images whose intensity is related to the D value, i.e., texture behaviour. The results revealed that the fractal images could point out the differences between normal and pathological tissues. By applying histogram-splitting segmentation to the fractal images, pathological areas were isolated. Two different techniques (i.e., the method developed by Pentland and the "blanket" method) were employed to obtain fractal dimension values, and the results were compared; in both cases, the appropriateness of the fractal description of the original images was verified.
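
    As an illustration of block-wise fractal texture description, the sketch below estimates a box-counting dimension for a binarized block or region of interest; the study itself used Pentland's method and the blanket method on grayscale blocks, which are not reproduced here.

```python
# Sketch: box-counting fractal dimension of a binary block (e.g. an edge map or
# thresholded ROI). Assumes the block is at least as large as the largest box size.
import numpy as np

def box_counting_dimension(binary_block, box_sizes=(2, 4, 8, 16)):
    h, w = binary_block.shape
    counts = []
    for s in box_sizes:
        trimmed = binary_block[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()          # boxes containing foreground pixels
        counts.append(max(occupied, 1))
    # Slope of log(count) versus log(1/s) estimates the fractal dimension D.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes, dtype=float)), np.log(counts), 1)
    return slope
```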

  16. NMR-based diffusion pore imaging.

    PubMed

    Laun, Frederik Bernd; Kuder, Tristan Anselm; Wetscherek, Andreas; Stieltjes, Bram; Semmler, Wolfhard

    2012-08-01

    Nuclear magnetic resonance (NMR) diffusion experiments offer a unique opportunity to study boundaries restricting the diffusion process. In a recent Letter [Phys. Rev. Lett. 107, 048102 (2011)], we introduced the idea and concept that such diffusion experiments can be interpreted as NMR imaging experiments. Consequently, images of closed pores, in which the spins diffuse, can be acquired. In the work presented here, an in-depth description of the diffusion pore imaging technique is provided. Image artifacts due to gradient profiles of finite duration, field inhomogeneities, and surface relaxation are considered. Gradients of finite duration lead to image blurring and edge enhancement artifacts. Field inhomogeneities have benign effects on diffusion pore images, and surface relaxation can lead to a shrinkage and shift of the pore image. The relation between boundary structure and the imaginary part of the diffusion weighted signal is analyzed, and it is shown that information on pore coherence can be obtained without the need to measure the phase of the diffusion weighted signal. Moreover, it is shown that quite arbitrary gradient profiles can be used for diffusion pore imaging. The matrices required for numerical calculations are stated and provided as supplemental material.

  17. Investigation of the effects of metal-wire resonators in sub-wavelength array based on time-reversal technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, Hui-Lin, E-mail: tuhl-uestc@163.com, E-mail: xiaoshaoqiu@uestc.edu.cn; Xiao, Shao-Qiu, E-mail: tuhl-uestc@163.com, E-mail: xiaoshaoqiu@uestc.edu.cn

    A resonant metalens consisting of metal-wire resonators of equal, finite length can break the diffraction barrier and is well suited for super-resolution imaging. In this study, a basic combination constructed from two metal-wire resonators with different lengths is proposed, and its resonant characteristics are analyzed using the method of moments (MoM). Based on the time reversal (TR) technique, this kind of combination can be applied to a sub-wavelength two-element antenna array with a 1/40-wavelength interval to make the elements work simultaneously with little interference in the frequency bands of 1.0-1.5 GHz and 1.5-2.0 GHz, respectively. The simulations and experiments show that the MoM analysis and the application of the resonators can be used to design multi-frequency sub-wavelength antenna arrays efficiently. This general design method is convenient and can be used for many applications, such as weakening jamming in communication systems and sub-wavelength imaging over a broad frequency band.

  18. FPGA-based architecture for motion recovering in real-time

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis for reaching higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for real-time motion estimation based on FPGA technology. The technique used for motion estimation is optical flow, chosen for its accuracy and the density of its velocity estimates, although other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates approaching gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results are presented, and the real-time performance is discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
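
    For reference, the dense optical-flow computation that such an architecture accelerates can be sketched in software with OpenCV; Farneback's algorithm is used here purely as an illustration and is not claimed to be the algorithm implemented on the FPGA.

```python
# Sketch: dense optical flow between two 8-bit grayscale frames (software reference).
import cv2
import numpy as np

def dense_flow(prev_gray, next_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return flow, magnitude, angle    # per-pixel motion vectors, in pixels per frame
```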

  19. A video-based speed estimation technique for localizing the wireless capsule endoscope inside gastrointestinal tract.

    PubMed

    Bao, Guanqun; Mi, Liang; Geng, Yishuang; Zhou, Mingda; Pahlavan, Kaveh

    2014-01-01

    Wireless Capsule Endoscopy (WCE) is progressively emerging as one of the most popular non-invasive imaging tools for gastrointestinal (GI) tract inspection. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of intestinal disease. For the WCE, the position of the capsule is defined as the linear distance it is away from certain fixed anatomical landmarks. In order to measure the distance the capsule has traveled, a precise knowledge of how fast the capsule moves is urgently needed. In this paper, we present a novel computer vision based speed estimation technique that is able to extract the speed of the endoscopic capsule by analyzing the displacements between consecutive frames. The proposed approach is validated using a virtual testbed as well as the real endoscopic images. Results show that the proposed method is able to precisely estimate the speed of the endoscopic capsule with 93% accuracy on average, which enhances the localization accuracy of the WCE to less than 2.49 cm.
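
    The displacement-based idea described above can be sketched with ordinary feature matching: estimate the median pixel displacement between consecutive frames and convert it to a speed with a calibration factor. ORB matching and the scale and frame-interval values below are illustrative assumptions, not the paper's method or parameters.

```python
# Sketch: frame-to-frame displacement (and hence capsule speed) from ORB feature matches.
import cv2
import numpy as np

def frame_speed(frame1, frame2, mm_per_px=0.05, frame_interval_s=0.5):
    """frame1, frame2: consecutive 8-bit grayscale frames; returns speed in mm/s or None."""
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    shifts = [np.hypot(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0],
                       kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1])
              for m in matches]
    median_px = float(np.median(shifts))            # robust displacement estimate
    return median_px * mm_per_px / frame_interval_s
```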

  20. Imaging quality analysis of computer-generated holograms using the point-based method and slice-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen; Chen, Siqing; Zheng, Huadong; Sun, Tao; Yu, Yingjie; Gao, Hongyue; Asundi, Anand K.

    2017-06-01

    Computer holography has made notable progress in recent years. The point-based method and the slice-based method are the chief calculation algorithms for generating holograms in holographic display. Although both methods have been validated numerically and optically, the differences in imaging quality between them have not been specifically analyzed. In this paper, we analyze the imaging quality of computer-generated phase holograms generated by point-based Fresnel zone plates (PB-FZP), the point-based Fresnel diffraction algorithm (PB-FDA) and the slice-based Fresnel diffraction algorithm (SB-FDA). The calculation formulas and hologram generation with the three methods are demonstrated. In order to suppress speckle noise, sequential phase-only holograms are generated in our work. Numerically and experimentally reconstructed images are also exhibited. By comparing the imaging quality, the merits and drawbacks of the three methods are analyzed and conclusions are drawn.
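
    A minimal sketch of point-based hologram computation follows: each object point contributes a spherical wave to the hologram plane, and the phase-only hologram is the argument of the summed field. The wavelength, pixel pitch, and geometry are illustrative; the paper's PB-FZP, PB-FDA, and SB-FDA formulations differ in their details.

```python
# Sketch: point-based computation of a phase-only hologram from a small 3-D point cloud.
import numpy as np

def point_based_phase_hologram(points, amplitudes, n=512, pitch=8e-6, wavelength=532e-9):
    """points: (N, 3) array of (x, y, z) in metres with z > 0; amplitudes: (N,)."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for (x, y, z), a in zip(points, amplitudes):
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)   # distance to each hologram pixel
        field += a * np.exp(1j * k * r) / r                  # spherical wave contribution
    return np.angle(field)                                   # phase-only hologram

# usage: holo = point_based_phase_hologram(np.array([[0.0, 0.0, 0.2]]), np.array([1.0]))
```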

  1. Robust Global Image Registration Based on a Hybrid Algorithm Combining Fourier and Spatial Domain Techniques

    DTIC Science & Technology

    2012-09-01

    Peter N. Crabtree, Collin Seanor, ...

    Results demonstrate the performance of a hybrid global image registration algorithm combining Fourier and spatial domain techniques, based on analysis of a set of images of an ISO 12233 [12] resolution chart.
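
    As a generic illustration of the hybrid idea named in the title - a Fourier-domain step for the coarse global shift followed by spatial-domain refinement - a NumPy sketch is given below; it is not the report's algorithm, and the refinement window is an illustrative choice.

```python
# Sketch: coarse-to-fine translation registration.
# Stage 1 (Fourier domain): phase correlation gives the global integer shift.
# Stage 2 (spatial domain): small exhaustive search refines it by normalized
# cross-correlation. The returned shift is the roll that best aligns mov to ref.
import numpy as np

def phase_correlation_shift(ref, mov):
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    R /= np.abs(R) + 1e-12
    corr = np.fft.fftshift(np.fft.ifft2(R).real)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - ref.shape[0] // 2, peak[1] - ref.shape[1] // 2

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def hybrid_register(ref, mov, refine=2):
    dy0, dx0 = phase_correlation_shift(ref, mov)
    best_shift, best_score = (dy0, dx0), -np.inf
    for dy in range(dy0 - refine, dy0 + refine + 1):
        for dx in range(dx0 - refine, dx0 + refine + 1):
            shifted = np.roll(np.roll(mov, dy, axis=0), dx, axis=1)
            score = ncc(ref, shifted)
            if score > best_score:
                best_shift, best_score = (dy, dx), score
    return best_shift, best_score
```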

  2. Advanced imaging technologies for mapping cadaveric lymphatic anatomy: magnetic resonance and computed tomography lymphangiography.

    PubMed

    Pan, W R; Rozen, W M; Stretch, J; Thierry, B; Ashton, M W; Corlett, R J

    2008-09-01

    Lymphatic anatomy has become increasingly clinically important as surgical techniques evolve for investigating and treating cancer metastases. However, due to limited anatomical techniques available, research in this field has been insufficient. The techniques of computed tomography (CT) and magnetic resonance (MR) lymphangiography have not been described previously in the imaging of cadaveric lymphatic anatomy. This preliminary work describes the feasibility of these advanced imaging technologies for imaging lymphatic anatomy. A single, fresh cadaveric lower limb underwent lymphatic dissection and cannulation utilizing microsurgical techniques. Contrast materials for both CT and MR studies were chosen based on their suitability for subsequent clinical use, and imaging was undertaken with a view to mapping lymphatic anatomy. Microdissection studies were compared with imaging findings in each case. Both MR-based and CT-based contrast media in current clinical use were found to be suitable for demonstrating cadaveric lymphatic anatomy upon direct intralymphatic injection. MR lymphangiography and CT lymphangiography are feasible modalities for cadaveric anatomical research for lymphatic anatomy. Future studies including refinements in scanning techniques may offer these technologies to the clinical setting.

  3. Fusion of UAV photogrammetry and digital optical granulometry for detection of structural changes in floodplains

    NASA Astrophysics Data System (ADS)

    Langhammer, Jakub; Lendzioch, Theodora; Mirijovsky, Jakub

    2016-04-01

    Granulometric analysis is a traditional and important method for describing sedimentary material, with various applications in sedimentology, hydrology and geomorphology. However, conventional granulometric field survey methods are time consuming, laborious, costly and invasive to the surface being sampled, which can be a limiting factor for their applicability in protected areas. Optical granulometry has recently emerged as an image analysis technique enabling non-invasive surveys: it employs semi-automated identification of clasts from calibrated digital imagery taken on site with a conventional high-resolution digital camera and a calibrated frame. The image processing allows detection and measurement of mixed-size natural grains, their sorting, and quantitative analysis using standard granulometric approaches. Despite known limitations, the technique today represents a reliable tool that significantly eases and speeds up field surveys in fluvial geomorphology. However, such surveys are still limited in the spatial coverage of the sites and in their applicability to research at the multitemporal scale. In our study, we present a novel approach based on the fusion of two image analysis techniques - optical granulometry and UAV-based photogrammetry - that bridges the gap between the need for high-resolution structural information for granulometric analysis and spatially accurate, seamless data coverage. We have developed and tested a workflow that uses a UAV imaging platform to deliver seamless, high-resolution and spatially accurate imagery of the study site, from which the granulometric properties of the sedimentary material can be derived. We have set up a workflow modeling chain providing (i) the optimum flight parameters for UAV imagery to balance the two key divergent requirements - image resolution and seamless spatial coverage, (ii) the workflow for processing the UAV-acquired imagery by means of optical granulometry, and (iii) the workflow for analysis of the spatial distribution and temporal changes of granulometric properties across the point bar. The proposed technique was tested in a case study of an active point bar of a mid-latitude mountain stream in the Sumava mountains, Czech Republic, exposed to repeated flooding. UAV photogrammetry was used to acquire very high resolution imagery to build high-precision digital terrain models and an orthoimage. The orthoimage was then analyzed using the digital optical granulometric tool BaseGrain. This approach allowed us (i) to analyze the spatial distribution of grain size in seamless transects over an active point bar and (ii) to assess the multitemporal changes in the granulometric properties of the point bar material resulting from flooding. The tested framework proved the applicability of the proposed method for granulometric analysis with accuracy comparable to field optical granulometry. The seamless nature of the data enables the study of the spatial distribution of granulometric properties across the study sites as well as the analysis of multitemporal changes from repeated imaging.

  4. Integrated analysis of remote sensing products from basic geological surveys. [Brazil

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  5. Mammalian Cardiovascular Patterning as Determined by Hemodynamic Forces and Blood Vessel Genetics

    NASA Astrophysics Data System (ADS)

    Anderson, Gregory Arthur

    Cardiovascular development is a process that involves the timing of multiple molecular events and numerous subtle three-dimensional conformational changes. Traditional developmental biology techniques have provided large quantities of information as to how these complex organ systems develop. However, the major drawback of most current developmental biological imaging is that it is two-dimensional in nature. It is now well recognized that circulation of blood is required for normal patterning and remodeling of blood vessels. Normal blood vessel formation is dependent upon a complex network of signaling pathways, and genetic mutations in these pathways lead to impaired vascular development, heart failure, and lethality. As such, it is not surprising that mutant mice with aberrant cardiovascular patterning are so common, since normal development requires proper coordination between three systems: the heart, the blood, and the vasculature. This thesis describes the implementation of a three-dimensional imaging technique, optical projection tomography (OPT), in conjunction with a computer-based registration algorithm to statistically analyze developmental differences in groups of wild-type mouse embryos. Embryos that differ by only a few hours of gestational time are shown to have discernible differences in blood vessel formation and in the progression of heart development. This thesis describes how we analyzed mouse models of cardiovascular perturbation by OPT to detect morphological differences in embryonic development in both qualitative and quantitative ways. Both a blood vessel-specific mutation and a cardiac-specific mutation were analyzed, providing evidence that developmental defects of these types can be quantified. Finally, we describe the implementation of OPT imaging to identify statistically significant phenotypes from three different mouse models of cardiovascular perturbation across a range of developmental time points. Image registration methods, combined with intensity- and deformation-based analyses, are described and utilized to fully characterize myosin light chain 2a (Mlc2a), delta-like ligand 4 (Dll4), and Endoglin (Eng) mutant mouse embryos. We show that Eng mutant embryos are statistically similar to the Mlc2a phenotype, confirming that these mouse mutants suffer from a primary cardiac developmental defect. Thus, a loss of hemodynamic force caused by defective pumping of the heart is the primary developmental defect affecting these mice.

  6. Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery

    PubMed Central

    Alam, Mohammad S.; Bhuiyan, Sharif M. A.

    2014-01-01

    In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840

  7. A forester's look at the application of image manipulation techniques to multitemporal Landsat data

    NASA Technical Reports Server (NTRS)

    Williams, D. L.; Stauffer, M. L.; Leung, K. C.

    1979-01-01

    Registered, multitemporal Landsat data of a study area in central Pennsylvania were analyzed to detect and assess changes in the forest canopy resulting from insect defoliation. Images taken July 19, 1976, and June 27, 1977, were chosen specifically to represent forest canopy conditions before and after defoliation, respectively. Several image manipulation and data transformation techniques, developed primarily for estimating agricultural and rangeland standing green biomass, were applied to these data. The applicability of each technique for estimating the severity of forest canopy defoliation was then evaluated. All techniques tested had highly correlated results. In all cases, heavy defoliation was discriminated from healthy forest. Areas of moderate defoliation were confused with healthy forest on northwest (NW) aspects, but were distinct from healthy forest conditions on southeast (SE)-facing slopes.

  8. Simulation tools for analyzer-based x-ray phase contrast imaging system with a conventional x-ray source

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Zhou, Wei; Stoupin, Stanislav; Verman, Boris; Brankov, J. G.

    2016-09-01

    Analyzer-based X-ray phase contrast imaging (ABI) belongs to a broader family of phase-contrast (PC) X-ray imaging modalities. Unlike conventional X-ray radiography, which measures only X-ray absorption, PC imaging also measures the deflection of X-rays induced by the refractive properties of the object. It has been shown that refraction imaging provides better contrast when imaging soft tissue, which is of great interest in medical imaging applications. In this paper, we introduce a simulation tool specifically designed to simulate an analyzer-based X-ray phase contrast imaging system with a conventional polychromatic X-ray source. By utilizing ray tracing and basic physical principles of diffraction theory, our simulation tool can predict the X-ray beam profile shape, the energy content, and the total throughput (photon count) at the detector. In addition, we can evaluate the imaging system point-spread function for various system configurations.

  9. Automated Track Recognition and Event Reconstruction in Nuclear Emulsion

    NASA Technical Reports Server (NTRS)

    Deines-Jones, P.; Cherry, M. L.; Dabrowska, A.; Holynski, R.; Jones, W. V.; Kolganova, E. D.; Kudzia, D.; Nilsen, B. S.; Olszewski, A.; Pozharova, E. A.

    1998-01-01

    The major advantages of nuclear emulsion for detecting charged particles are its submicron position resolution and sensitivity to minimum ionizing particles. These must be balanced, however, against the difficult manual microscope measurement by skilled observers required for the analysis. We have developed an automated system to acquire and analyze the microscope images from emulsion chambers. Each emulsion plate is analyzed independently, allowing coincidence techniques to be used in order to reject background and estimate error rates. The system has been used to analyze a sample of high-multiplicity Pb-Pb interactions (charged particle multiplicities approx. 1100) produced by the 158 GeV/c per nucleon Pb-208 beam at CERN. Automatically reconstructed track lists agree with our best manual measurements to 3%. We describe the image analysis and track reconstruction techniques, and discuss the measurement and reconstruction uncertainties.

  10. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    PubMed

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one of the applications, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge the presented work is the first that automatically registers both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience as it provides neuroanatomists with tools for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of the ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N² different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be very easily implemented with VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the different quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.
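
    As a rough sketch of the kind of subsampling-pyramid decomposition described above (not the HSP algorithm itself), the code below builds residual levels by downsampling with a 2x2 block mean, re-expanding and subtracting; the filter choice and level count are assumptions made for illustration only.

      import numpy as np

      def build_pyramid(img, levels=3):
          """Simple subsampling pyramid: each level stores the residual between
          the current image and the expansion of its 2x downsampled version."""
          residues, current = [], img.astype(float)
          for _ in range(levels):
              # 2x2 block mean as a stand-in for the reduction filter
              h, w = (current.shape[0] // 2) * 2, (current.shape[1] // 2) * 2
              small = current[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
              expanded = np.kron(small, np.ones((2, 2)))   # nearest-neighbour expansion
              residues.append(current[:h, :w] - expanded)  # residual: mostly near-zero values
              current = small
          return residues, current                          # residual levels + coarse image

      residues, coarse = build_pyramid(np.random.rand(64, 64))
      print([r.shape for r in residues], coarse.shape)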

  12. Alpha trimmed correlation for touchless finger image mosaicing

    NASA Astrophysics Data System (ADS)

    Rao, Shishir P.; Rajendran, Rahul; Agaian, Sos S.; Mulawka, Marzena Mary Ann

    2016-05-01

    In this paper, a novel technique to mosaic multiview contactless finger images is presented. This technique makes use of different correlation methods, such as the Alpha-trimmed correlation, Pearson's correlation [1], Kendall's correlation [2], and Spearman's correlation [2], to combine multiple views of the finger. The key contributions of the algorithm are that it: 1) stitches images more accurately, 2) provides better image fusion effects, 3) has a better visual effect on the overall image, and 4) is more reliable. Extensive computer simulations show that the proposed method produces better or comparable stitched images than several state-of-the-art methods, such as those presented by Feng Liu [3], K Choi [4], H Choi [5], and G Parziale [6]. In addition, we also compare various correlation techniques with the correlation method mentioned in [3] and analyze the output. In the future, this method can be extended to obtain a 3D model of the finger using multiple views, and to help in generating scenic panoramic images and underwater 360-degree panoramas.

  13. Thermal Characterization of Defects in Aircraft Structures Via Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott; Winfree, William P.

    1997-01-01

    Recent advances in thermal imaging technology have spawned a number of new thermal NDE techniques that provide quantitative information about flaws in aircraft structures. Thermography has a number of advantages as an inspection technique. It is a totally noncontacting, nondestructive, imaging technology capable of inspecting a large area in a matter of a few seconds. The development of fast, inexpensive image processors has aided in the attractiveness of thermography as an NDE technique. These image processors have increased the signal to noise ratio of thermography and facilitated significant advances in post-processing. The resulting digital images enable archival records for comparison with later inspections, thus providing a means of monitoring the evolution of damage in a particular structure. The National Aeronautics and Space Administration's Langley Research Center has developed a thermal NDE technique designed to image a number of potential flaws in aircraft structures. The technique involves injecting a small, spatially controlled heat flux into the outer surface of an aircraft. Images of fatigue cracking, bond integrity and material loss due to corrosion are generated from measurements of the induced surface temperature variations. This paper will present a discussion of the development of the thermal imaging system as well as the techniques used to analyze the resulting thermal images. Spatial tailoring of the heat, coupled with the analysis techniques, represents a significant improvement in the detectability of flaws over conventional thermal imaging. Results of laboratory experiments on fabricated crack, disbond and material loss samples will be presented to demonstrate the capabilities of the technique. An integral part of the development of this technology is the use of analytic and computational modeling. The experimental results will be compared with these models to demonstrate the utility of such an approach.

  14. Current applications of molecular imaging and luminescence-based techniques in traditional Chinese medicine.

    PubMed

    Li, Jinhui; Wan, Haitong; Zhang, Hong; Tian, Mei

    2011-09-01

    Traditional Chinese medicine (TCM), which is fundamentally different from Western medicine, has been widely investigated using various approaches. Cellular- or molecular-based imaging has been used to investigate and illuminate the various challenges identified and the progress made using therapeutic methods in TCM. Insight into the cellular and molecular changes underlying TCM processes and the ability to image these processes will enhance our understanding of the diseases treated by TCM and will provide new tools to diagnose and treat patients. Various TCM therapies including herbs and formulations, acupuncture and moxibustion, massage, Gua Sha, and diet therapy have been analyzed using positron emission tomography, single photon emission computed tomography, functional magnetic resonance imaging, ultrasound and optical imaging. These imaging tools have kept pace with developments in molecular biology, nuclear medicine, and computer technology. We provide an overview of recent developments in demystifying ancient knowledge - like the power of energy flow and blood flow meridians, and serial naturopathies - which are essential to visually and vividly recognize the body using modern technology. In TCM, treatment can be individualized in a holistic or systematic view that is consistent with molecular imaging technologies. Future studies might include using molecular imaging in conjunction with TCM to easily diagnose or monitor patients naturally and noninvasively. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  15. Validation of a Novel Technique and Evaluation of the Surface Free Energy of Food

    PubMed Central

    Senturk Parreidt, Tugce; Schmid, Markus; Hauser, Carolin

    2017-01-01

    Characterizing the physical properties of a surface is largely dependent on determining the contact angle exhibited by a liquid. Contact angles on the surfaces of rough and irregularly-shaped food samples are difficult to measure using a contact angle meter (goniometer). As a consequence, values for the surface energy and its components can be mismeasured. The aim of this work was to use a novel contact angle measurement method, namely the snake-based ImageJ program, to accurately measure the contact angles of rough and irregular shapes, such as food samples, and so enable more accurate calculation of the surface energy of food materials. In order to validate the novel technique, the contact angles of three different test liquids on four different smooth polymer films were measured using both the ImageJ software with the DropSnake plugin and the widely used contact angle meter. The distributions of the values obtained by the two methods were different. Therefore, the contact angles, surface energies, and polar and dispersive components of plastic films obtained using the ImageJ program and the Drop Shape Analyzer (DSA) were interpreted with the help of simple linear regression analysis. As case studies, the superficial characteristics of strawberry and endive salad epicarp were measured with the ImageJ program and the results were interpreted with the Drop Shape Analyzer equivalent according to our regression models. The data indicated that the ImageJ program can be successfully used for contact angle determination of rough and strongly hydrophobic surfaces, such as strawberry epicarp. However, for the special geometry of droplets on slightly hydrophobic surfaces, such as salad leaves, the program code interpolation part can be altered. PMID:28425932

  16. X-ray modeling for SMILE

    NASA Astrophysics Data System (ADS)

    Sun, T.; Wang, C.; Wei, F.; Liu, Z. Q.; Zheng, J.; Yu, X. Z.; Sembay, S.; Branduardi-Raymont, G.

    2016-12-01

    SMILE (Solar wind Magnetosphere Ionosphere Link Explorer) is a novel mission to explore the coupling of the solar wind-magnetosphere-ionosphere system by providing global images of the magnetosphere and aurora. As X-ray imaging is a brand-new technique for studying the large-scale magnetopause, modeling of the solar wind charge exchange (SWCX) X-ray emissions in the magnetosheath and cusps is vital in various aspects: it helps the design of the Soft X-ray Imager (SXI) on SMILE, the selection of satellite orbits, as well as the analysis of expected scientific outcomes. Based on the PPMLR-MHD code, we present simulation results of the X-ray emissions in geospace during storm time. Both the polar orbit and the Molniya orbit are used. From the X-ray images of the magnetosheath and cusps, the magnetospheric responses to an interplanetary shock and an IMF southward turning are analyzed.

  17. A user's guide to localization-based super-resolution fluorescence imaging.

    PubMed

    Dempsey, Graham T

    2013-01-01

    Advances in far-field fluorescence microscopy over the past decade have led to the development of super-resolution imaging techniques that provide more than an order of magnitude improvement in spatial resolution compared to conventional light microscopy. One such approach, called Stochastic Optical Reconstruction Microscopy (STORM) uses the sequential, nanometer-scale localization of individual fluorophores to reconstruct a high-resolution image of a structure of interest. This is an attractive method for biological investigation at the nanoscale due to its relative simplicity, both conceptually and practically in the laboratory. Like most research tools, however, the devil is in the details. The aim of this chapter is to serve as a guide for applying STORM to the study of biological samples. This chapter will discuss considerations for choosing a photoswitchable fluorescent probe, preparing a sample, selecting hardware for data acquisition, and collecting and analyzing data for image reconstruction. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Intelligent Color Vision System for Ripeness Classification of Oil Palm Fresh Fruit Bunch

    PubMed Central

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Halim, Zaini Abdul; Ibrahim, Haidi; Ali, Syed Salim Syed

    2012-01-01

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category. PMID:23202043
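
    A minimal sketch (not the authors' exact network) of the reduced-feature variant using scikit-learn: hypothetical color features are projected with PCA and a small feed-forward neural network is trained on the reduced features. The feature count, class labels and network size are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      # Hypothetical color features (e.g. RGB/HSV statistics) and ripeness labels
      X = np.random.rand(300, 12)
      y = np.random.randint(0, 3, 300)        # e.g. underripe / ripe / overripe

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      pca = PCA(n_components=4).fit(X_train)                 # feature reduction step
      clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
      clf.fit(pca.transform(X_train), y_train)
      print("accuracy:", clf.score(pca.transform(X_test), y_test))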

  19. Land cover mapping after the tsunami event over Nanggroe Aceh Darussalam (NAD) province, Indonesia

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Alias, A. N.; Mohd. Saleh, N.; Wong, C. J.; Surbakti, M. S.

    2008-03-01

    Remote sensing offers an important means of detecting and analyzing temporal changes occurring in our landscape. This research used remote sensing to quantify land use/land cover changes in the Nanggroe Aceh Darussalam (NAD) province, Indonesia, on a regional scale. The objective of this paper is to assess the changes derived from the analysis of Landsat TM data. A Landsat TM image was used to develop a land cover classification map for 27 March 2005. Four supervised classification techniques (Maximum Likelihood, Minimum Distance-to-Mean, Parallelepiped, and Parallelepiped with Maximum Likelihood Classifier Tiebreaker) were applied to the satellite image. Training sites and accuracy assessment were needed for the supervised classification techniques. The training sites were established using polygons based on the colour image. High detection accuracy (>80%) and overall Kappa (>0.80) were achieved by the Parallelepiped with Maximum Likelihood Classifier Tiebreaker classifier in this study. This preliminary study has produced promising results, indicating that land cover mapping can be carried out using remote sensing classification methods on satellite digital imagery.
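
    As a minimal sketch of one of the supervised classifiers named above, the minimum distance-to-mean rule assigns each pixel to the class whose training-site mean spectrum is closest in feature space; the band count and the random data stand in for real training statistics.

      import numpy as np

      def minimum_distance_classify(pixels, class_means):
          """pixels: (n_pixels, n_bands); class_means: (n_classes, n_bands).
          Returns the index of the nearest class mean for every pixel."""
          # Euclidean distance from each pixel to each class mean
          d = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
          return np.argmin(d, axis=1)

      # Hypothetical 6-band Landsat TM pixels and 4 land cover class means
      pixels = np.random.rand(1000, 6)
      class_means = np.random.rand(4, 6)
      labels = minimum_distance_classify(pixels, class_means)
      print(np.bincount(labels, minlength=4))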

  20. Intelligent color vision system for ripeness classification of oil palm fresh fruit bunch.

    PubMed

    Fadilah, Norasyikin; Mohamad-Saleh, Junita; Abdul Halim, Zaini; Ibrahim, Haidi; Syed Ali, Syed Salim

    2012-10-22

    Ripeness classification of oil palm fresh fruit bunches (FFBs) during harvesting is important to ensure that they are harvested during optimum stage for maximum oil production. This paper presents the application of color vision for automated ripeness classification of oil palm FFB. Images of oil palm FFBs of type DxP Yangambi were collected and analyzed using digital image processing techniques. Then the color features were extracted from those images and used as the inputs for Artificial Neural Network (ANN) learning. The performance of the ANN for ripeness classification of oil palm FFB was investigated using two methods: training ANN with full features and training ANN with reduced features based on the Principal Component Analysis (PCA) data reduction technique. Results showed that compared with using full features in ANN, using the ANN trained with reduced features can improve the classification accuracy by 1.66% and is more effective in developing an automated ripeness classifier for oil palm FFB. The developed ripeness classifier can act as a sensor in determining the correct oil palm FFB ripeness category.

  1. Designing intuitive dialog boxes in Windows environments

    NASA Astrophysics Data System (ADS)

    Souetova, Natalia

    2000-01-01

    Some approaches to user interface design were analyzed. Most existing interfaces seem difficult for newcomers to understand and learn. Several ways of designing interfaces were defined, based on the psychology of computer image perception and on experience gained while working with artists and designers without special technical education. Some applications with standard Windows interfaces, based on these results, were developed. The Windows environment was chosen because it is very popular. This increased the quality and speed of users' work and reduced the number of difficulties and mistakes, so that highly qualified employees no longer spend their working time on explanation and help.

  2. Video multiple watermarking technique based on image interlacing using DWT.

    PubMed

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
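
    A minimal sketch, assuming PyWavelets is available, of additive DWT-domain embedding of an Arnold-scrambled binary watermark; it is not the authors' exact scheme, and the wavelet, the embedding strength alpha and the choice of the level-3 approximation band are illustrative assumptions.

      import numpy as np
      import pywt  # PyWavelets

      def arnold_scramble(w, iterations=5):
          """Arnold cat map scramble of a square watermark (simple encryption step)."""
          n = w.shape[0]
          x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
          out = w.copy()
          for _ in range(iterations):
              out = out[(x + y) % n, (x + 2 * y) % n]
          return out

      def embed_watermark(host, watermark_bits, alpha=5.0, level=3):
          """Embed a scrambled binary watermark into the level-3 DWT approximation band."""
          coeffs = pywt.wavedec2(host.astype(float), "haar", level=level)
          wm = arnold_scramble(watermark_bits.astype(float))
          assert wm.shape == coeffs[0].shape, "watermark must match the approximation band"
          coeffs[0] = coeffs[0] + alpha * wm           # additive embedding
          return pywt.waverec2(coeffs, "haar")

      host = np.random.randint(0, 256, (256, 256))     # hypothetical host frame
      wm = np.random.randint(0, 2, (32, 32))           # 256 / 2**3 = 32
      watermarked = embed_watermark(host, wm)
      print(watermarked.shape)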

  3. Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images.

    PubMed

    Rajaraman, Sivaramakrishnan; Antani, Sameer K; Poostchi, Mahdieh; Silamut, Kamolrat; Hossain, Md A; Maude, Richard J; Jaeger, Stefan; Thoma, George R

    2018-01-01

    Malaria is a blood disease caused by the Plasmodium parasites transmitted through the bite of female Anopheles mosquito. Microscopists commonly examine thick and thin blood smears to diagnose disease and compute parasitemia. However, their accuracy depends on smear quality and expertise in classifying and counting parasitized and uninfected cells. Such an examination could be arduous for large-scale diagnoses resulting in poor quality. State-of-the-art image-analysis based computer-aided diagnosis (CADx) methods using machine learning (ML) techniques, applied to microscopic images of the smears using hand-engineered features demand expertise in analyzing morphological, textural, and positional variations of the region of interest (ROI). In contrast, Convolutional Neural Networks (CNN), a class of deep learning (DL) models promise highly scalable and superior results with end-to-end feature extraction and classification. Automated malaria screening using DL techniques could, therefore, serve as an effective diagnostic aid. In this study, we evaluate the performance of pre-trained CNN based DL models as feature extractors toward classifying parasitized and uninfected cells to aid in improved disease screening. We experimentally determine the optimal model layers for feature extraction from the underlying data. Statistical validation of the results demonstrates the use of pre-trained CNNs as a promising tool for feature extraction for this purpose.

  4. Single-Photon Detectors for Time-of-Flight Range Imaging

    NASA Astrophysics Data System (ADS)

    Stoppa, David; Simoni, Andrea

    We live in a three-dimensional (3D) world and, thanks to the stereoscopic vision provided by our two eyes in combination with the powerful neural network of the brain, we are able to perceive the distance of objects. Nevertheless, despite the huge market volume of digital cameras, solid-state image sensors can capture only a two-dimensional (2D) projection of the scene under observation, losing a variable of paramount importance, i.e., the scene depth. On the contrary, 3D vision tools could offer amazing possibilities of improvement in many areas thanks to the increased accuracy and reliability of the models representing the environment. Among the great variety of distance measuring techniques and detection systems available, this chapter treats only the emerging niche of solid-state, scannerless systems based on the TOF principle and using detectors with SPAD-based pixels. The chapter is organized into three main parts. First, TOF systems and measuring techniques are described. In the second part, the most meaningful sensor architectures for scannerless TOF distance measurements are analyzed, focusing on the circuit building blocks required by time-resolved image sensors. Finally, a performance summary is provided and a perspective view of near-future developments of SPAD-TOF sensors is given.
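
    A minimal sketch of the direct TOF principle behind SPAD pixels: photon arrival times are histogrammed, the histogram peak is taken as the round-trip time, and the distance follows from d = c*t/2. The bin width and the synthetic arrival data are illustrative assumptions.

      import numpy as np

      C = 299_792_458.0          # speed of light, m/s

      def tof_distance(arrival_times_s, bin_width_s=100e-12):
          """Estimate target distance from SPAD photon arrival times (direct TOF)."""
          bins = np.arange(0, arrival_times_s.max() + bin_width_s, bin_width_s)
          hist, edges = np.histogram(arrival_times_s, bins=bins)
          peak = np.argmax(hist)
          t_peak = 0.5 * (edges[peak] + edges[peak + 1])
          return C * t_peak / 2.0  # divide by 2: light travels to the target and back

      # Synthetic arrivals: 10 ns round trip (~1.5 m) plus timing jitter and background
      signal = np.random.normal(10e-9, 150e-12, 2000)
      background = np.random.uniform(0, 50e-9, 500)
      print(f"{tof_distance(np.concatenate([signal, background])):.3f} m")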

  5. SU-E-J-261: The Importance of Appropriate Image Preprocessing to Augment the Information of Radiomics Image Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Fried, D; Fave, X

    Purpose: To investigate how different image preprocessing techniques, their parameters, and different boundary handling techniques can augment the information content of features and improve the features' differentiating capability. Methods: Twenty-seven NSCLC patients with a solid tumor volume and no visually obvious necrotic regions in the simulation CT images were identified. Fourteen of these patients had a necrotic region visible in their pre-treatment PET images (necrosis group), and thirteen had no visible necrotic region in the pre-treatment PET images (non-necrosis group). We investigated how image preprocessing can impact the ability of radiomics image features extracted from the CT to differentiate between the two groups. It is expected that the histogram in the necrosis group is more negatively skewed and that the uniformity in the necrosis group is lower. Therefore, we analyzed two first-order features, skewness and uniformity, on the image inside the GTV in the intensity range [−20HU, 180HU] under combinations of several image preprocessing techniques: (1) applying an isotropic Gaussian or anisotropic diffusion smoothing filter over a range of parameters (Gaussian smoothing: size=11, sigma=0:0.1:2.3; anisotropic smoothing: iteration=4, kappa=0:10:110); (2) applying a boundary-adapted Laplacian filter; and (3) applying an adaptive upper threshold for the intensity range. A 2-tailed T-test was used to evaluate the differentiating capability of CT features with respect to pre-treatment PET necrosis. Results: Without any preprocessing, no differences in either skewness or uniformity were observed between the two groups. After applying appropriate Gaussian filters (sigma>=1.3) or anisotropic filters (kappa>=60) with the adaptive upper threshold, skewness was significantly more negative in the necrosis group (p<0.05). By applying boundary-adapted Laplacian filtering after appropriate Gaussian filters (0.5<=sigma<=1.1) or anisotropic filters (20<=kappa<=50), the uniformity was significantly lower in the necrosis group (p<0.05). Conclusion: Appropriate selection of image preprocessing techniques allows radiomics features to extract more useful information and thereby improve prediction models based on these features.
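
    A minimal sketch, using SciPy, of the two first-order features after Gaussian preprocessing; the histogram binning, the sigma value, the HU window and the synthetic ROI are illustrative assumptions, and the boundary-adapted Laplacian step is not included.

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.stats import skew

      def first_order_features(ct_roi_hu, sigma=1.3, hu_range=(-20, 180), n_bins=64):
          """Gaussian-smooth an ROI, clip to the HU window, then compute
          histogram skewness and uniformity (sum of squared bin probabilities)."""
          smoothed = gaussian_filter(ct_roi_hu.astype(float), sigma=sigma)
          voxels = smoothed[(smoothed >= hu_range[0]) & (smoothed <= hu_range[1])]
          hist, _ = np.histogram(voxels, bins=n_bins, range=hu_range)
          p = hist / hist.sum()
          return {"skewness": skew(voxels), "uniformity": np.sum(p ** 2)}

      # Hypothetical GTV sub-volume in HU
      roi = np.random.normal(60, 40, (32, 32, 16))
      print(first_order_features(roi))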

  6. A real time study on condition monitoring of distribution transformer using thermal imager

    NASA Astrophysics Data System (ADS)

    Mariprasath, T.; Kirubakaran, V.

    2018-05-01

    The transformer is one of the critical apparatus in the power system. Even a few minutes of outage severely affects the power system. Hence, prevention-based maintenance techniques are essential. Continuous condition monitoring technology significantly increases the life span of the transformer and reduces the maintenance cost; monitoring the transformer's temperature is therefore essential. In this paper, a critical review has been made of various condition monitoring techniques. Furthermore, a new method, the hot spot indication technique, is discussed, and the transformer's operating condition is monitored using a thermal imager. From the thermal analysis, it is inferred that the major hotspot locations appear at the connection lead-out and that the bushing is the hottest spot in the transformer, so monitoring the oil level is essential. Alongside, a real time power quality analysis has been carried out using a power analyzer. It shows that industrial drives inject current harmonics into the distribution network, which causes power quality problems on the grid. Moreover, the current harmonic levels exceed the IEEE standard limit. Hence, an adequate harmonics suppression technique is the need of the hour.

  7. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Pan, X; Stayman, J

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  8. Femtosecond imaging of nonlinear acoustics in gold.

    PubMed

    Pezeril, Thomas; Klieber, Christoph; Shalagatskyi, Viktor; Vaudel, Gwenaelle; Temnov, Vasily; Schmidt, Oliver G; Makarov, Denys

    2014-02-24

    We have developed a high-sensitivity, low-noise femtosecond imaging technique based on pump-probe time-resolved measurements with a standard CCD camera. The approach used in the experiment is based on lock-in acquisition of images generated by a femtosecond laser probe synchronized to modulation of a femtosecond laser pump at the same rate. This technique allows time-resolved imaging of laser-excited phenomena with femtosecond time resolution. We illustrate the technique by time-resolved imaging of the nonlinear reshaping of a laser-excited picosecond acoustic pulse after propagation through a thin gold layer. Image analysis reveals the direct 2D visualization of the nonlinear acoustic propagation of the picosecond acoustic pulse. Many ultrafast pump-probe investigations can profit from this technique because of the wealth of information it provides over a typical single-diode and lock-in amplifier setup; for example, it can be used to image ultrasonic echoes in biological samples.
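
    A minimal sketch of software lock-in detection on a stack of camera frames: each pixel's time series is demodulated at the pump modulation frequency to recover the amplitude of the pump-induced signal. The frame rate, modulation frequency and synthetic data are illustrative assumptions, not the authors' acquisition parameters.

      import numpy as np

      def lockin_image(frames, f_mod, frame_rate):
          """frames: (n_frames, H, W) stack acquired while the pump is modulated at f_mod.
          Returns the per-pixel amplitude of the component at f_mod (software lock-in)."""
          n = frames.shape[0]
          t = np.arange(n) / frame_rate
          ref_i = np.cos(2 * np.pi * f_mod * t)            # in-phase reference
          ref_q = np.sin(2 * np.pi * f_mod * t)            # quadrature reference
          x = np.tensordot(ref_i, frames, axes=(0, 0)) * 2 / n
          y = np.tensordot(ref_q, frames, axes=(0, 0)) * 2 / n
          return np.hypot(x, y)                            # amplitude image

      # Synthetic stack: 25 Hz modulation sampled at 200 frames/s, weak signal in one corner
      t = np.arange(400) / 200.0
      frames = 0.01 * np.random.randn(400, 64, 64)
      frames[:, :16, :16] += 0.05 * np.cos(2 * np.pi * 25 * t)[:, None, None]
      amp = lockin_image(frames, f_mod=25, frame_rate=200)
      print(amp[:16, :16].mean(), amp[32:, 32:].mean())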

  9. Study of pipe thickness loss using a neutron radiography method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohamed, Abdul Aziz; Wahab, Aliff Amiru Bin; Yazid, Hafizal B.

    2014-02-12

    The purpose of this preliminary work is to study thickness changes in objects using neutron radiography, and the technique used for the radiography was studied in the course of the project. The experiment was carried out at the NUR-2 facility of the TRIGA research reactor at the Malaysian Nuclear Agency, Malaysia. Test samples of varying materials were used. The samples were radiographed using the direct technique, and the radiographic images were recorded on nitrocellulose film. The films obtained were digitized to be processed and analyzed. Digital processing was performed on the images using the Isee! software to produce better images for analysis. The thickness changes in the images were measured and compared with the real thickness of the objects. From the data collected, the percentage differences between measured and real thickness are below 2%, a very low deviation from the original values, thereby verifying the neutron radiography technique used in this project.

  10. Glow discharge sources for atomic and molecular analyses

    NASA Astrophysics Data System (ADS)

    Storey, Andrew Patrick

    Two types of glow discharges were used and characterized for chemical analysis. The flowing atmospheric pressure afterglow (FAPA) source, based on a helium glow discharge (GD), was utilized to analyze samples with molecular mass spectrometry. A second GD, operated at reduced pressure in argon, was employed to map the elemental composition of a solid surface with novel optical detection systems, enabling new applications and perspectives for GD emission spectrometry. Like many plasma-based ambient desorption-ionization sources being used around the world, the FAPA requires a supply of helium to operate effectively. With increased pressures on global helium supply and pricing, the use of an interrupted stream of helium for analysis was explored for vapor and solid samples. In addition to the mass spectra generated by the FAPA source, schlieren imaging and infrared thermography were employed to map the behavior of the source and its surroundings under the altered conditions. Additionally, a new annular microplasma variation of the FAPA source was developed and characterized. A spectroscopic imaging system that utilized an adjustable-tilt interference filter was used to map the elemental composition of a sample surface by glow discharge emission spectroscopy. This apparatus was compared to other GD imaging techniques for mapping elemental surface composition. The wide bandpass filter resulted in significant spectral interferences that could be partially overcome with chemometric data processing. Because time-resolved GD emission spectroscopy can provide fine depth-profiling measurements, a natural extension of GD imaging would be its application to three-dimensional characterization of samples. However, the simultaneous cathodic sputtering that occurs across the sample results in a sampling process that is not completely predictable. These issues are frequently encountered when laterally varied samples are explored with glow discharge imaging techniques. These insights are described with respect to their consequences for both imaging and conventional GD spectroscopic techniques.

  11. Diagnostic accuracy of ovarian cyst segmentation in B-mode ultrasound images

    NASA Astrophysics Data System (ADS)

    Bibicu, Dorin; Moraru, Luminita; Stratulat (Visan), Mirela

    2013-11-01

    Cystic and polycystic ovary syndrome is an endocrine disorder affecting women of fertile age. The Moore Neighbor Contour, Watershed Method, Active Contour Models, and a recent method based on the Active Contour Model with Selective Binary and Gaussian Filtering Regularized Level Set (ACM&SBGFRLS) were used in this paper to detect the border of the ovarian cyst in echography images. In order to analyze the efficiency of the segmentation, an original computer-aided software application developed in MATLAB was proposed. The results of the segmentation were compared and evaluated against the reference contour manually delineated by a sonography specialist. Both the accuracy and the time complexity of the segmentation tasks are investigated. The Fréchet distance (FD), as a similarity measure between two curves, and the area error rate (AER), defined as the difference between the segmented areas, are used as estimators of the segmentation accuracy. In this study, the most efficient methods for segmentation of the ovarian cyst were analyzed. The research was carried out on a set of 34 ultrasound images of ovarian cysts.
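
    A minimal sketch of the two accuracy estimators mentioned above: the discrete Fréchet distance between a segmented and a reference contour (dynamic-programming formulation) and the area error rate between the corresponding binary masks; the contour sampling, mask shapes and variable names are illustrative assumptions.

      import numpy as np

      def discrete_frechet(P, Q):
          """Discrete Fréchet distance between two polylines P, Q of shape (n, 2)."""
          n, m = len(P), len(Q)
          d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)   # pairwise distances
          ca = np.empty((n, m))
          ca[0, 0] = d[0, 0]
          for i in range(1, n):
              ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
          for j in range(1, m):
              ca[0, j] = max(ca[0, j - 1], d[0, j])
          for i in range(1, n):
              for j in range(1, m):
                  ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
          return ca[-1, -1]

      def area_error_rate(seg_mask, ref_mask):
          """Relative difference between segmented and reference areas (binary masks)."""
          return abs(seg_mask.sum() - ref_mask.sum()) / ref_mask.sum()

      theta = np.linspace(0, 2 * np.pi, 200)
      ref = np.c_[np.cos(theta), np.sin(theta)]                 # reference contour
      seg = 1.05 * ref + 0.01 * np.random.randn(200, 2)         # slightly offset segmentation
      seg_mask = np.zeros((64, 64), bool)
      ref_mask = np.zeros((64, 64), bool)
      seg_mask[10:40, 10:40] = True
      ref_mask[10:42, 10:42] = True
      print(discrete_frechet(seg, ref), area_error_rate(seg_mask, ref_mask))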

  12. Printing line/space patterns on nonplanar substrates using a digital micromirror device-based point-array scanning technique

    NASA Astrophysics Data System (ADS)

    Kuo, Hung-Fei; Kao, Guan-Hsuan; Zhu, Liang-Xiu; Hung, Kuo-Shu; Lin, Yu-Hsin

    2018-02-01

    This study used a digital micromirror device (DMD) to produce point-array patterns and employed a self-developed optical system to define line-and-space patterns on nonplanar substrates. First, field tracing was employed to analyze the aerial images of the lithographic system, which comprised an optical system and the DMD. Multiobjective particle swarm optimization was then applied to determine the spot overlapping rate used. The objective functions were set to minimize linewidth and maximize image log slope, through which the dose of the exposure agent could be effectively controlled and the quality of the nonplanar lithography could be enhanced. Laser beams with 405-nm wavelength were employed as the light source. Silicon substrates coated with photoresist were placed on a nonplanar translation stage. The DMD was used to produce lithographic patterns, during which the parameters were analyzed and optimized. The optimal delay time-sequence combinations were used to scan images of the patterns. Finally, an exposure linewidth of less than 10 μm was successfully achieved using the nonplanar lithographic process.

  13. How to create a marketing strategy based on hospital characteristics that attract physicians.

    PubMed

    Nordstrom, R D; Horton, D E; Hatcher, M E

    1987-03-01

    Through use of multivariate statistical and research techniques, the authors analyzed 30 hospital features that contribute to a physician's image of a hospital as being a good or a poor place for patient admission and in which to practice. Use of the data obtained in this study can enable a hospital administrator to monitor changes in physicians' attitudes, plan strategies to encourage quality physicians to admit their patients, improve aspects perceived to be weak or unresponsive, and capitalize on strengths.

  14. Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.

    PubMed

    Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P

    2010-01-01

    In this paper, we present a novel method of analyzing retinal vasculature using the Fourier Fractal Dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern recognition performance evaluation, Receiver Operating Characteristic (ROC) analysis, and a medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.

  15. Digital image processing for photo-reconnaissance applications

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1972-01-01

    Digital image-processing techniques developed for processing pictures from NASA space vehicles are analyzed in terms of enhancement, quantitative restoration, and information extraction. Digital filtering, and the action of a high frequency filter in the real and Fourier domain are discussed along with color and brightness.

  16. A COMPUTER MODEL OF LUNG MORPHOLOGY TO ANALYZE SPECT IMAGES

    EPA Science Inventory

    Measurement of the three-dimensional (3-D) spatial distribution of aerosol deposition can be performed using Single Photon Emission Computed Tomography (SPECT). The advantage of using 3-D techniques over planar gamma imaging is that deposition patterns can be related to real lun...

  17. Restoration of out-of-focus images based on circle of confusion estimate

    NASA Astrophysics Data System (ADS)

    Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto

    2002-11-01

    In this paper a new method for fast out-of-focus blur estimation and restoration is proposed. It is suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique. This edge detection is carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through the application of a new inverse filtering technique. This algorithm gives sharp images, reducing ringing and crisping artifacts over a wider frequency region. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques found in the literature.
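
    The following is not the paper's estimator, only a rough illustrative surrogate: it extracts the green samples of an assumed RGGB Bayer mosaic and scores each block by its mean gradient magnitude, which decreases as defocus blur grows; the block size and mosaic layout are assumptions.

      import numpy as np

      def green_channel(bayer):
          """Extract green samples from an RGGB Bayer mosaic (half-resolution estimate)."""
          g1 = bayer[0::2, 1::2].astype(float)   # G sites in R rows
          g2 = bayer[1::2, 0::2].astype(float)   # G sites in B rows
          h = min(g1.shape[0], g2.shape[0])
          w = min(g1.shape[1], g2.shape[1])
          return 0.5 * (g1[:h, :w] + g2[:h, :w])

      def block_sharpness(green, block=32):
          """Mean gradient magnitude per block; lower values suggest stronger defocus blur."""
          gy, gx = np.gradient(green)
          mag = np.hypot(gx, gy)
          H, W = (mag.shape[0] // block) * block, (mag.shape[1] // block) * block
          blocks = mag[:H, :W].reshape(H // block, block, W // block, block)
          return blocks.mean(axis=(1, 3))

      bayer = np.random.randint(0, 1023, (512, 512))      # hypothetical 10-bit CFA frame
      print(block_sharpness(green_channel(bayer)).shape)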

  18. Mapping Antiretroviral Drugs in Tissue by IR-MALDESI MSI Coupled to the Q Exactive and Comparison with LC-MS/MS SRM Assay

    NASA Astrophysics Data System (ADS)

    Barry, Jeremy A.; Robichaud, Guillaume; Bokhart, Mark T.; Thompson, Corbin; Sykes, Craig; Kashuba, Angela D. M.; Muddiman, David C.

    2014-12-01

    This work describes the coupling of the IR-MALDESI imaging source with the Q Exactive mass spectrometer. IR-MALDESI MSI was used to elucidate the spatial distribution of several HIV drugs in cervical tissues that had been incubated in either a low or high concentration. Serial sections of those analyzed by IR-MALDESI MSI were homogenized and analyzed by LC-MS/MS to quantify the amount of each drug present in the tissue. By comparing the two techniques, an agreement between the average intensities from the imaging experiment and the absolute quantities for each drug was observed. This correlation between these two techniques serves as a prerequisite to quantitative IR-MALDESI MSI. In addition, a targeted MS2 imaging experiment was also conducted to demonstrate the capabilities of the Q Exactive and to highlight the added selectivity that can be obtained with SRM or MRM imaging experiments.

  19. Hollow Polypropylene Yarns as a Biomimetic Brain Phantom for the Validation of High-Definition Fiber Tractography Imaging.

    PubMed

    Guise, Catarina; Fernandes, Margarida M; Nóbrega, João M; Pathak, Sudhir; Schneider, Walter; Fangueiro, Raul

    2016-11-09

    Current brain imaging methods largely fail to provide detailed information about the location and severity of axonal injuries and do not anticipate the recovery of patients with traumatic brain injury. High-definition fiber tractography appears as a novel imaging modality based on water motion in the brain that allows for direct visualization and quantification of the degree of axon damage, thus predicting the functional deficits due to traumatic axonal injury and loss of cortical projections. This neuroimaging modality still faces major challenges because it lacks a "gold standard" for technique validation and the respective quality control. The present work aims to study the potential of hollow polypropylene yarns to mimic human white matter axons and to construct a brain phantom for the calibration and validation of brain diffusion techniques based on magnetic resonance imaging, including high-definition fiber tractography imaging. Hollow multifilament polypropylene yarns were produced by a melt-spinning process and characterized in terms of their physicochemical properties. Scanning electron microscopy images of the filament cross-section showed an inner diameter of approximately 12 μm, confirming their appropriateness to mimic brain axons. The chemical purity of the polypropylene yarns, as well as the interaction between the water and the filament surface, important properties for predicting water behavior and diffusion inside the yarns, were also evaluated. Restricted and hindered water diffusion was confirmed by fluorescence microscopy. Finally, the yarns were magnetic resonance imaging scanned and analyzed using high-definition fiber tractography, revealing these hollow polypropylene structures to be an excellent choice for simulating white matter brain axons and suitable for constructing an accurate brain phantom.

  20. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. © 2013 The Author Development, Growth & Differentiation © 2013 Japanese Society of Developmental Biologists.

  1. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques.

    PubMed

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-12-01

    Liver ultrasound images are very common and are often used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the view of the kidney is observable, in order to improve the image contrast and quality, which has a fuzzy definition; namely, contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Based on the Mean Squared Error and Peak Signal to Noise Ratio measured for the different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to the improvement of the contrast and visual quality of the images and to the improvement of the results of liver segmentation algorithms in the images. Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications.
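
    A minimal sketch of contrast enhancement with a fuzzy intensification (INT) operator, one of the approaches named above: gray levels are mapped to membership values in [0, 1], the INT operator pushes memberships away from 0.5, and the result is mapped back to gray levels. The membership function and iteration count are illustrative assumptions rather than the study's exact settings.

      import numpy as np

      def fuzzy_intensification(img, iterations=1):
          """Contrast enhancement with the fuzzy INT operator."""
          g = img.astype(float)
          g_min, g_max = g.min(), g.max()
          mu = (g - g_min) / (g_max - g_min + 1e-12)       # fuzzification to [0, 1]
          for _ in range(iterations):
              mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)  # INT operator
          return (mu * (g_max - g_min) + g_min).astype(img.dtype)           # defuzzification

      ultrasound = np.random.randint(40, 200, (128, 128)).astype(np.uint8)
      enhanced = fuzzy_intensification(ultrasound)
      print(ultrasound.std(), enhanced.std())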

  2. Quality Improvement of Liver Ultrasound Images Using Fuzzy Techniques

    PubMed Central

    Bayani, Azadeh; Langarizadeh, Mostafa; Radmard, Amir Reza; Nejad, Ahmadreza Farzaneh

    2016-01-01

    Background: Liver ultrasound images are very common and are often used to diagnose diffuse liver diseases such as fatty liver. However, the low quality of such images makes it difficult to analyze them and diagnose diseases. The purpose of this study, therefore, is to improve the contrast and quality of liver ultrasound images. Methods: In this study, a number of fuzzy-logic-based image contrast enhancement algorithms were applied, using Matlab2013b, to liver ultrasound images in which the view of the kidney is observable, in order to improve the image contrast and quality, which has a fuzzy definition; namely, contrast improvement using a fuzzy intensification operator, contrast improvement applying fuzzy image histogram hyperbolization, and contrast improvement by fuzzy IF-THEN rules. Results: Based on the Mean Squared Error and Peak Signal to Noise Ratio measured for the different images, the fuzzy methods provided better results, and their implementation - compared with the histogram equalization method - led both to the improvement of the contrast and visual quality of the images and to the improvement of the results of liver segmentation algorithms in the images. Conclusion: Comparison of the four algorithms revealed the power of fuzzy logic in improving image contrast compared with traditional image processing algorithms. Moreover, the contrast improvement algorithm based on a fuzzy intensification operator was selected as the strongest algorithm considering the measured indicators. This method can also be used in future studies on other ultrasound images for quality improvement and for other image processing and analysis applications. PMID:28077898

  3. Quantitative Image Analysis Techniques with High-Speed Schlieren Photography

    NASA Technical Reports Server (NTRS)

    Pollard, Victoria J.; Herron, Andrew J.

    2017-01-01

    Optical flow visualization techniques such as schlieren and shadowgraph photography are essential to understanding fluid flow when interpreting acquired wind tunnel test data. Output of the standard implementations of these visualization techniques in test facilities is often limited only to qualitative interpretation of the resulting images. Although various quantitative optical techniques have been developed, these techniques often require special equipment or are focused on obtaining very precise and accurate data about the visualized flow. These systems are not practical in small, production wind tunnel test facilities. However, high-speed photography capability has become a common upgrade to many test facilities in order to better capture images of unsteady flow phenomena such as oscillating shocks and flow separation. This paper describes novel techniques utilized by the authors to analyze captured high-speed schlieren and shadowgraph imagery from wind tunnel testing for quantification of observed unsteady flow frequency content. Such techniques have applications in parametric geometry studies and in small facilities where more specialized equipment may not be available.
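
    A minimal sketch of the kind of frequency quantification described above: the intensity time series of a single pixel from a high-speed schlieren sequence is transformed with an FFT and the dominant unsteady-flow frequency is read from its power spectrum; the frame rate and synthetic data are illustrative assumptions.

      import numpy as np

      def dominant_frequency(frames, row, col, frame_rate):
          """Dominant oscillation frequency of one pixel in a high-speed image sequence."""
          series = frames[:, row, col].astype(float)
          series -= series.mean()                          # remove the DC component
          spectrum = np.abs(np.fft.rfft(series)) ** 2
          freqs = np.fft.rfftfreq(len(series), d=1.0 / frame_rate)
          return freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin

      # Synthetic 10 kHz sequence with a 750 Hz oscillating shock at one pixel
      t = np.arange(2048) / 10_000.0
      frames = np.random.randn(2048, 8, 8) * 0.1
      frames[:, 4, 4] += np.sin(2 * np.pi * 750 * t)
      print(dominant_frequency(frames, 4, 4, frame_rate=10_000))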

  4. Sensor image prediction techniques

    NASA Astrophysics Data System (ADS)

    Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.

    1981-02-01

    The preparation of prediction imagery is a complex, costly, and time consuming process. Image prediction systems which produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of his mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.

  5. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review

    PubMed Central

    Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.

    2015-01-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, either using substitution only (using fractals) and permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented where the permutation-phase has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided from short to long key to persist brute-force attacks. In addition, sensitivities of those different techniques to a one bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus many recent research with respect to the used generators, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper. PMID:26966561
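
    For orientation, a minimal sketch of two of the measures cited above, adjacent-pixel correlation and Shannon entropy, applied to a stand-in cipher image; the random test image and function names are assumptions, not taken from the reviewed algorithms.

    # Minimal sketch of two common cipher-image quality measures.
    import numpy as np

    def adjacent_correlation(img, direction="horizontal"):
        x = img.astype(np.float64)
        if direction == "horizontal":
            a, b = x[:, :-1].ravel(), x[:, 1:].ravel()
        elif direction == "vertical":
            a, b = x[:-1, :].ravel(), x[1:, :].ravel()
        else:  # diagonal
            a, b = x[:-1, :-1].ravel(), x[1:, 1:].ravel()
        return np.corrcoef(a, b)[0, 1]            # near 0 for a well-scrambled image

    def shannon_entropy(img):
        hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))            # ideally close to 8 bits/pixel

    cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in cipher image
    print(adjacent_correlation(cipher, "horizontal"), shannon_entropy(cipher))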

  6. Use of multidimensional, multimodal imaging and PACS to support neurological diagnoses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.; Knowlton, R.; Hoo, K.S.

    1995-12-31

    Technological advances in brain imaging have revolutionized diagnosis in neurology and neurological surgery. Major imaging techniques include magnetic resonance imaging (MRI) to visualize structural anatomy, positron emission tomography (PET) to image metabolic function and cerebral blood flow, magnetoencephalography (MEG) to visualize the location of physiologic current sources, and magnetic resonance spectroscopy (MRS) to measure specific biochemicals. Each of these techniques studies different biomedical aspects of the brain, but an effective means to quantify and correlate the disparate imaging datasets in order to improve clinical decision making processes is lacking. This paper describes several techniques developed in a UNIX-based neurodiagnostic workstation to aid the non-invasive presurgical evaluation of epilepsy patients. These techniques include on-line access to the picture archiving and communication systems (PACS) multimedia archive, coregistration of multimodality image datasets, and correlation and quantification of structural and functional information contained in the registered images. For illustration, the authors describe the use of these techniques in a patient case of non-lesional neocortical epilepsy. They also present future work based on preliminary studies.

  7. Cassette Series Designed for Live-Cell Imaging of Proteins and High Resolution Techniques in Yeast

    PubMed Central

    Young, Carissa L.; Raden, David L.; Caplan, Jeffrey; Czymmek, Kirk; Robinson, Anne S.

    2012-01-01

    During the past decade, it has become clear that protein function and regulation are highly dependent upon intracellular localization. Although fluorescent protein variants are ubiquitously used to monitor protein dynamics, localization, and abundance, fluorescence light microscopy techniques often lack the resolution to explore protein heterogeneity and cellular ultrastructure. Several approaches have been developed to identify, characterize, and monitor the spatial localization of proteins and complexes at the sub-organelle level; yet, many of these techniques have not been applied to yeast. Thus, we have constructed a series of cassettes containing codon-optimized epitope tags, fluorescent protein variants that cover the full spectrum of visible light, a TetCys motif used for FlAsH-based localization, and the first evaluation in yeast of a photoswitchable variant – mEos2 – to monitor discrete subpopulations of proteins via confocal microscopy. This series of modules, complete with six different selection markers, provides the optimal flexibility during live-cell imaging and multicolor labeling in vivo. Furthermore, high-resolution imaging techniques include the yeast-enhanced TetCys motif that is compatible with diaminobenzidine photooxidation used for protein localization by electron microscopy and mEos2 that is ideal for super-resolution microscopy. We have examined the utility of our cassettes by analyzing all probes fused to the C-terminus of Sec61, a polytopic membrane protein of the endoplasmic reticulum of moderate protein concentration, in order to directly compare fluorescent probes, their utility and technical applications. Our series of cassettes expands the repertoire of molecular tools available to advance targeted spatiotemporal investigations using multiple live-cell, super-resolution or electron microscopy imaging techniques. PMID:22473760

  8. Remote Safety Monitoring for Elderly Persons Based on Omni-Vision Analysis

    PubMed Central

    Xiang, Yun; Tang, Yi-ping; Ma, Bao-qing; Yan, Hang-chen; Jiang, Jun; Tian, Xu-yuan

    2015-01-01

    Remote monitoring services for elderly persons are important as the aged populations in most developed countries continue to grow. To monitor the safety and health of the elderly population, we propose a novel omni-directional vision sensor based system, which can detect and track object motion, recognize human posture, and analyze human behavior automatically. In this work, we have made the following contributions: (1) we develop a remote safety monitoring system which can provide real-time and automatic health care for elderly persons and (2) we design a novel motion history or energy image based algorithm for moving object tracking. Our system can accurately and efficiently collect, analyze, and transfer elderly activity information and provide health care in real-time. Experimental results show that our technique can improve the data analysis efficiency by 58.5% for object tracking. Moreover, for the human posture recognition application, the success rate can reach 98.6% on average. PMID:25978761

  9. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
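
    The sketch below illustrates the general idea of multi-hash-code matching with Hamming distances; the matching rule, code lengths and synthetic data are assumptions for illustration, not the authors' scheme.

    # Minimal sketch, assuming each image carries several primitive-class hash codes.
    import numpy as np

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    def multi_hash_distance(query_codes, image_codes):
        """For each query code, take the closest code of the archive image and sum."""
        return sum(min(hamming(q, c) for c in image_codes) for q in query_codes)

    rng = np.random.default_rng(0)
    query = [rng.integers(0, 2, 32) for _ in range(3)]                         # 3 codes, 32 bits each
    archive = {f"img_{i}": [rng.integers(0, 2, 32) for _ in range(3)] for i in range(5)}
    ranked = sorted(archive, key=lambda k: multi_hash_distance(query, archive[k]))
    print(ranked[:3])   # most similar archive images first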

  10. A hybrid spatiotemporal and Hough-based motion estimation approach applied to magnetic resonance cardiac images

    NASA Astrophysics Data System (ADS)

    Carranza, N.; Cristóbal, G.; Sroubek, F.; Ledesma-Carbayo, M. J.; Santos, A.

    2006-08-01

    Myocardial motion analysis and quantification are of utmost importance for analyzing contractile heart abnormalities, which can be a symptom of coronary artery disease. A fundamental problem in processing sequences of images is the computation of the optical flow, which is an approximation to the real image motion. This paper presents a new algorithm for optical flow estimation based on a spatiotemporal-frequency (STF) approach, more specifically on the computation of the Wigner-Ville distribution (WVD) and the Hough Transform (HT) of the motion sequences. The latter is a well-known line and shape detection method that is very robust against incomplete data and noise. The rationale for using the HT in this context is that it provides a value of the displacement field from the STF representation. In addition, a probabilistic approach based on Gaussian mixtures has been implemented in order to improve the accuracy of the motion detection. Experimental results with synthetic sequences are compared against an implementation of the variational technique for local and global motion estimation, where it is shown that the results obtained here are accurate and robust to noise degradations. Real cardiac magnetic resonance images have also been tested and evaluated with the current method.

  11. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores (1=poor, 4=excellent) are used to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. On 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion with potential utility in free-breathing 3D acquisitions. PMID:24123058

  12. New Researches and Application Progress of Commonly Used Optical Molecular Imaging Technology

    PubMed Central

    Chen, Zhi-Yi; Yang, Feng; Lin, Yan; Zhou, Qiu-Lan; Liao, Yang-Ying

    2014-01-01

    Optical molecular imaging, a new medical imaging technique, is developed based on genomics, proteomics and modern optical imaging techniques, and is characterized by non-invasiveness, non-radiativity, high cost-effectiveness, high resolution, high sensitivity and simple operation in comparison with conventional imaging modalities. Currently, it has become one of the most widely used molecular imaging techniques and has been applied in gene expression regulation and activity detection, biological development and cytological detection, drug research and development, pathogenesis research, pharmaceutical effect evaluation and therapeutic effect evaluation, and so forth. This paper reviews the latest researches and application progress of commonly used optical molecular imaging techniques such as bioluminescence imaging and fluorescence molecular imaging. PMID:24696850

  13. Galaxy morphology - An unsupervised machine learning approach

    NASA Astrophysics Data System (ADS)

    Schutter, A.; Shamir, L.

    2015-09-01

    Structural properties provide valuable information about the formation and evolution of galaxies, and are important for understanding the past, present, and future universe. Here we use unsupervised machine learning methodology to analyze a network of similarities between galaxy morphological types, and automatically deduce a morphological sequence of galaxies. Application of the method to the EFIGI catalog shows that the morphological scheme produced by the algorithm is largely in agreement with the De Vaucouleurs system, demonstrating the ability of computer vision and machine learning methods to automatically profile galaxy morphological sequences. The unsupervised analysis method is based on comprehensive computer vision techniques that compute the visual similarities between the different morphological types. Rather than relying on human cognition, the proposed system deduces the similarities between sets of galaxy images in an automatic manner, and is therefore not limited by the number of galaxies being analyzed. The source code of the method is publicly available, and the protocol of the experiment is included in the paper so that the experiment can be replicated, and the method can be used to analyze user-defined datasets of galaxy images.

  14. Potential of Near-Infrared Chemical Imaging as Process Analytical Technology Tool for Continuous Freeze-Drying.

    PubMed

    Brouckaert, Davinia; De Meyer, Laurens; Vanbillemont, Brecht; Van Bockstal, Pieter-Jan; Lammens, Joris; Mortier, Séverine; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas

    2018-04-03

    Near-infrared chemical imaging (NIR-CI) is an emerging tool for process monitoring because it combines the chemical selectivity of vibrational spectroscopy with spatial information. Whereas traditional near-infrared spectroscopy is an attractive technique for water content determination and solid-state investigation of lyophilized products, chemical imaging opens up possibilities for assessing the homogeneity of these critical quality attributes (CQAs) throughout the entire product. In this contribution, we aim to evaluate NIR-CI as a process analytical technology (PAT) tool for at-line inspection of continuously freeze-dried pharmaceutical unit doses based on spin freezing. The chemical images of freeze-dried mannitol samples were resolved via multivariate curve resolution, allowing us to visualize the distribution of mannitol solid forms throughout the entire cake. Second, a mannitol-sucrose formulation was lyophilized with variable drying times for inducing changes in water content. Analyzing the corresponding chemical images via principal component analysis, vial-to-vial variations as well as within-vial inhomogeneity in water content could be detected. Furthermore, a partial least-squares regression model was constructed for quantifying the water content in each pixel of the chemical images. It was hence concluded that NIR-CI is inherently a most promising PAT tool for continuously monitoring freeze-dried samples. Although some practicalities are still to be solved, this analytical technique could be applied in-line for CQA evaluation and for detecting the drying end point.
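
    A brief sketch of the kind of per-pixel partial least-squares calibration described above, using scikit-learn on synthetic spectra; the data, number of components and variable names are assumptions, not the authors' model.

    # Minimal sketch of a PLS calibration for water content, on synthetic NIR spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_pixels, n_wavelengths = 500, 120
    water = rng.uniform(0.5, 5.0, n_pixels)                       # % water, reference values
    spectra = np.outer(water, rng.normal(size=n_wavelengths))     # toy linear water signal
    spectra += rng.normal(scale=0.05, size=spectra.shape)         # measurement noise

    pls = PLSRegression(n_components=3)
    pls.fit(spectra, water)
    predicted = pls.predict(spectra).ravel()
    print("calibration RMSE:", np.sqrt(np.mean((predicted - water) ** 2)))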

  15. Dimensional metrology of lab-on-a-chip internal structures: a comparison of optical coherence tomography with confocal fluorescence microscopy.

    PubMed

    Reyes, D R; Halter, M; Hwang, J

    2015-07-01

    The characterization of internal structures in a polymeric microfluidic device, especially of a final product, will require a different set of optical metrology tools than those traditionally used for microelectronic devices. We demonstrate that optical coherence tomography (OCT) imaging is a promising technique to characterize the internal structures of poly(methyl methacrylate) devices where the subsurface structures often cannot be imaged by conventional wide field optical microscopy. The structural details of channels in the devices were imaged with OCT and analyzed with an in-house written ImageJ macro in an effort to identify the structural details of the channel. The dimensional values obtained with OCT were compared with laser-scanning confocal microscopy images of channels filled with a fluorophore solution. Attempts were also made using confocal reflectance and interferometry microscopy to measure the channel dimensions, but artefacts present in the images precluded quantitative analysis. OCT provided the most accurate estimates for the channel height based on an analysis of optical micrographs obtained after destructively slicing the channel with a microtome. OCT may be a promising technique for the future of three-dimensional metrology of critical internal structures in lab-on-a-chip devices because scans can be performed rapidly and noninvasively prior to their use. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  16. GIS-based automated management of highway surface crack inspection system

    NASA Astrophysics Data System (ADS)

    Chung, Hung-Chi; Shinozuka, Masanobu; Soeller, Tony; Girardello, Roberto

    2004-07-01

    An automated in-situ road surface distress surveying and management system, AMPIS, has been developed on the basis of video images within the framework of GIS software. Video image processing techniques are introduced to acquire, process and analyze the road surface images obtained from a moving vehicle. The ArcGIS platform is used to integrate the routines of image processing and spatial analysis in handling full-scale metropolitan highway surface distress detection and data fusion/management. This makes it possible to present user-friendly interfaces in GIS and to provide efficient visualizations of surveyed results, not only for transportation engineers to manage road surveying documentation, data acquisition, analysis and management, but also for financial officials to plan maintenance and repair programs and further evaluate the socio-economic impacts of highway degradation and deterioration. A review performed in this study of the fundamental principles of Pavement Management Systems (PMS) and their implementation indicates that the proposed approach of using the GIS concept and its tools for PMS applications will reshape PMS into a new information technology-based system that can provide convenient and efficient pavement inspection and management.

  17. Terahertz imaging for subsurface investigation of art paintings

    NASA Astrophysics Data System (ADS)

    Locquet, A.; Dong, J.; Melis, M.; Citrin, D. S.

    2017-08-01

    Terahertz (THz) reflective imaging is applied to the stratigraphic and subsurface investigation of oil paintings, with a focus on the mid-20th century Italian painting, `After Fishing', by Ausonio Tanda. THz frequency-wavelet domain deconvolution, which is an enhanced deconvolution technique combining frequency-domain filtering and stationary wavelet shrinkage, is utilized to resolve the optically thin paint layers or brush strokes. Based on the deconvolved terahertz data, the stratigraphy of the painting including the paint layers is reconstructed and subsurface features are clearly revealed. Specifically, THz C-scans and B-scans are analyzed based on different types of deconvolved signals to investigate the subsurface features of the painting, including the identification of regions with more than one paint layer, the refractive-index difference between paint layers, and the distribution of the paint-layer thickness. In addition, THz images are compared with X-ray images. The THz image of the thickness distribution of the paint exhibits a high degree of correlation with the X-ray transmission image, but THz images also reveal defects in the paperboard that cannot be identified in the X-ray image. Therefore, our results demonstrate that THz imaging can be considered as an effective tool for the stratigraphic and subsurface investigation of art paintings. They also open up the way for the use of non-ionizing THz imaging as a potential substitute for ionizing X-ray analysis in nondestructive evaluation of art paintings.

  18. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  19. Imaging and Analysis of Void-defects in Solder Joints Formed in Reduced Gravity using High-Resolution Computed Tomography

    NASA Technical Reports Server (NTRS)

    Easton, John W.; Struk, Peter M.; Rotella, Anthony

    2008-01-01

    As a part of efforts to develop an electronics repair capability for long duration space missions, techniques and materials for soldering components on a circuit board in reduced gravity must be developed. This paper presents results from testing solder joint formation in low gravity on a NASA Reduced Gravity Research Aircraft. The results presented include joints formed using eutectic tin-lead solder and one of the following fluxes: (1) a no-clean flux core, (2) a rosin flux core, and (3) a solid solder wire with external liquid no-clean flux. The solder joints are analyzed with a computed tomography (CT) technique which imaged the interior of the entire solder joint. This replaced an earlier technique that required the solder joint to be destructively ground down revealing a single plane which was subsequently analyzed. The CT analysis technique is described and results presented with implications for future testing as well as implications for the overall electronics repair effort discussed.

  20. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
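
    As a pointer to the deformation metric used above, a minimal sketch of a symmetric Hausdorff distance between two surface point sets with SciPy; the point sets here are synthetic stand-ins, not study data.

    # Minimal sketch of a symmetric Hausdorff distance between two point clouds.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    rng = np.random.default_rng(2)
    cad_surface = rng.normal(size=(500, 3))                              # idealized nodule surface (mm)
    ct_surface = cad_surface + rng.normal(scale=0.1, size=(500, 3))      # CT-derived surface, perturbed

    d_ab = directed_hausdorff(cad_surface, ct_surface)[0]
    d_ba = directed_hausdorff(ct_surface, cad_surface)[0]
    print("symmetric Hausdorff distance (mm):", max(d_ab, d_ba))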

  1. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    PubMed Central

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-01-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology. PMID:28786399

  2. Delineation of karst terranes in complex environments: Application of modern developments in the wavelet theory and data mining

    NASA Astrophysics Data System (ADS)

    Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery

    2013-04-01

    Karst areas occupy about 14% of the world's land area. Karst terranes of different origin create difficult conditions for construction, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the XXI century. Taking into account the complexity of geological media, unfavourable environments and the known ambiguity of geophysical data analysis, examination with a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising, signal enhancement, distinguishing signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, surface or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects that characterize karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of an integrated methodology for geophysical field examination will enable recognition of karst terranes even at low signal-to-noise ratios in complex geological environments. To analyze the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images without karst (N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of the previous steps we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and from non-karst subsurface, respectively.

  3. Road extraction from aerial images using a region competition algorithm.

    PubMed

    Amo, Miriam; Martínez, Fernando; Torre, Margarita

    2006-05-01

    In this paper, we present a user-guided method based on the region competition algorithm to extract roads, and we also provide some clues concerning the placement of the points required by the algorithm. The initial points are analyzed in order to find out whether it is necessary to add more initial points, and this process is based on image information. Not only is the algorithm able to obtain the road centerline, but it also recovers the road sides. An initial simple model is deformed by using region growing techniques to obtain a rough road approximation. This model is then refined by region competition. The result of this approach is that it delivers the simplest output vector information, fully recovering the road details as they are on the image, without performing any kind of symbolization. Therefore, we refine a general road model by using a reliable method to detect transitions between regions. This method is proposed in order to obtain information for feeding large-scale Geographic Information Systems.

  4. Feasibility evaluation of a neutron grating interferometer with an analyzer grating based on a structured scintillator.

    PubMed

    Kim, Youngju; Kim, Jongyul; Kim, Daeseung; Hussey, Daniel S; Lee, Seung Wook

    2018-03-01

    We introduce an analyzer grating based on a structured scintillator fabricated by a gadolinium oxysulfide powder filling method for a symmetric Talbot-Lau neutron grating interferometer. This is an alternative way to analyze the Talbot self-image of a grating interferometer without using an absorption grating to block neutrons. Since the structured scintillator analyzer grating itself generates the signal for neutron detection, we do not need an additional scintillator screen as an absorption analyzer grating. We have developed and tested an analyzer grating based on a structured scintillator in our symmetric Talbot-Lau neutron grating interferometer to produce high fidelity absorption, differential phase, and dark-field contrast images. The acquired images have been compared to results of a grating interferometer utilizing a typical absorption analyzer grating with two commercial scintillation screens. The analyzer grating based on the structured scintillator enhances interference fringe visibility and shows a great potential for economical fabrication, compact system design, and so on. We report the performance of the analyzer grating based on a structured scintillator and evaluate its feasibility for the neutron grating interferometer.

  5. Feasibility evaluation of a neutron grating interferometer with an analyzer grating based on a structured scintillator

    NASA Astrophysics Data System (ADS)

    Kim, Youngju; Kim, Jongyul; Kim, Daeseung; Hussey, Daniel. S.; Lee, Seung Wook

    2018-03-01

    We introduce an analyzer grating based on a structured scintillator fabricated by a gadolinium oxysulfide powder filling method for a symmetric Talbot-Lau neutron grating interferometer. This is an alternative way to analyze the Talbot self-image of a grating interferometer without using an absorption grating to block neutrons. Since the structured scintillator analyzer grating itself generates the signal for neutron detection, we do not need an additional scintillator screen as an absorption analyzer grating. We have developed and tested an analyzer grating based on a structured scintillator in our symmetric Talbot-Lau neutron grating interferometer to produce high fidelity absorption, differential phase, and dark-field contrast images. The acquired images have been compared to results of a grating interferometer utilizing a typical absorption analyzer grating with two commercial scintillation screens. The analyzer grating based on the structured scintillator enhances interference fringe visibility and shows a great potential for economical fabrication, compact system design, and so on. We report the performance of the analyzer grating based on a structured scintillator and evaluate its feasibility for the neutron grating interferometer.

  6. Systems Biology, Neuroimaging, Neuropsychology, Neuroconnectivity and Traumatic Brain Injury

    PubMed Central

    Bigler, Erin D.

    2016-01-01

    The patient who sustains a traumatic brain injury (TBI) typically undergoes neuroimaging studies, usually in the form of computed tomography (CT) and magnetic resonance imaging (MRI). In most cases the neuroimaging findings are clinically assessed with descriptive statements that provide qualitative information about the presence/absence of visually identifiable abnormalities, though little if any of the potential information in a scan is analyzed in any quantitative manner, except in research settings. Fortunately, major advances have been made, especially during the last decade, with regard to image quantification techniques, especially those that involve automated image analysis methods. This review argues that a systems biology approach to understanding quantitative neuroimaging findings in TBI provides an appropriate framework for better utilizing the information derived from quantitative neuroimaging and its relation with neuropsychological outcome. Different image analysis methods are reviewed in an attempt to integrate quantitative neuroimaging methods with neuropsychological outcome measures and to illustrate how different neuroimaging techniques tap different aspects of TBI-related neuropathology. Likewise, how different neuropathologies may relate to neuropsychological outcome is explored by examining how damage influences brain connectivity and neural networks. Emphasis is placed on the dynamic changes that occur following TBI and how best to capture those pathologies via different neuroimaging methods. However, traditional clinical neuropsychological techniques are not well suited for interpretation based on contemporary and advanced neuroimaging methods and network analyses. Significant improvements need to be made in the cognitive and behavioral assessment of the brain injured individual to better interface with advances in neuroimaging-based network analyses. Viewing both neuroimaging and neuropsychological processes within a systems biology perspective could represent a significant advancement for the field. PMID:27555810

  7. Using Image Analysis to Explore Changes In Bacterial Mat Coverage at the Base of a Hydrothermal Vent within the Caldera of Axial Seamount

    NASA Astrophysics Data System (ADS)

    Knuth, F.; Crone, T. J.; Marburg, A.

    2017-12-01

    The Ocean Observatories Initiative's (OOI) Cabled Array is delivering real-time high-definition video data from an HD video camera (CAMHD), installed at the Mushroom hydrothermal vent in the ASHES hydrothermal vent field within the caldera of Axial Seamount, an active submarine volcano located approximately 450 kilometers off the coast of Washington at a depth of 1,542 m. Every three hours the camera pans, zooms and focuses in on nine distinct scenes of scientific interest across the vent, producing 14-minute-long videos during each run. This standardized video sampling routine enables scientists to programmatically analyze the content of the video using automated image analysis techniques. Each scene-specific time series dataset can service a wide range of scientific investigations, including the estimation of bacterial flux into the system by quantifying chemosynthetic bacterial clusters (floc) present in the water column, relating periodicity in hydrothermal vent fluid flow to earth tides, measuring vent chimney growth in response to changing hydrothermal fluid flow rates, or mapping the patterns of fauna colonization, distribution and composition across the vent over time. We are currently investigating the seventh scene in the sampling routine, focused on the bacterial mat covering the seafloor at the base of the vent. We quantify the change in bacterial mat coverage over time using image analysis techniques, and examine the relationship between mat coverage, fluid flow processes, episodic chimney collapse events, and other processes observed by Cabled Array instrumentation. This analysis is being conducted using cloud-enabled computer vision processing techniques, programmatic image analysis, and time-lapse video data collected over the course of the first CAMHD deployment, from November 2015 to July 2016.
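
    A simplified sketch of the coverage measurement idea (fraction of bright pixels in a region of interest of one frame); the threshold, region bounds and stand-in frame are assumptions and do not reflect the project's actual computer vision pipeline.

    # Minimal sketch: bacterial mat coverage as the bright-pixel fraction in an ROI.
    import numpy as np

    def mat_coverage(frame_gray, roi, threshold=180):
        """frame_gray: 2D uint8 array; roi: (row0, row1, col0, col1)."""
        r0, r1, c0, c1 = roi
        patch = frame_gray[r0:r1, c0:c1]
        return np.count_nonzero(patch >= threshold) / patch.size

    frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)   # stand-in video frame
    print("coverage fraction:", mat_coverage(frame, roi=(600, 1000, 400, 1500)))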

  8. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining the image quality and cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
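
    For context, a minimal sketch of classic block truncation coding (BTC), the baseline that the adaptive SAR scheme above modifies; the block size and toy speckled image are assumptions.

    # Minimal sketch of classic two-level BTC encode/decode per block.
    import numpy as np

    def btc_encode_decode(img, block=4):
        out = np.empty_like(img, dtype=np.float64)
        h, w = img.shape
        for i in range(0, h, block):
            for j in range(0, w, block):
                x = img[i:i+block, j:j+block].astype(np.float64)
                m, s = x.mean(), x.std()
                bitmap = x >= m                        # 1 bit per pixel
                q, n = bitmap.sum(), x.size
                if 0 < q < n:                          # two levels preserve block mean and std
                    lo = m - s * np.sqrt(q / (n - q))
                    hi = m + s * np.sqrt((n - q) / q)
                else:
                    lo = hi = m
                out[i:i+block, j:j+block] = np.where(bitmap, hi, lo)
        return out

    sar = np.random.rayleigh(100, (64, 64))            # toy speckled image
    recon = btc_encode_decode(sar)
    print("reconstruction RMSE:", np.sqrt(np.mean((recon - sar) ** 2)))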

  9. Evaluation of a rule-based compositing technique for Landsat-5 TM and Landsat-7 ETM+ images

    NASA Astrophysics Data System (ADS)

    Lück, W.; van Niekerk, A.

    2016-05-01

    Image compositing is a multi-objective optimization process. Its goal is to produce a seamless, cloud- and artefact-free artificial image. This is achieved by aggregating image observations and by replacing poor and cloudy data with good observations from imagery acquired within the timeframe of interest. The compositing process aims to minimise the visual artefacts which could result from different radiometric properties caused by atmospheric conditions, phenologic patterns and land cover changes. It has the following requirements: (1) the image composite must be cloud free, which requires the detection of clouds and shadows, and (2) the image composite must be seamless, minimising artefacts visible across inter-image seams. This study proposes a new rule-based compositing technique (RBC) that combines the strengths of several existing methods. A quantitative and qualitative evaluation is made of the RBC technique by comparing it to the maximum NDVI (MaxNDVI), minimum red (MinRed) and maximum ratio (MaxRatio) compositing techniques. A total of 174 Landsat TM and ETM+ images, covering three study sites and three different timeframes for each site, are used in the evaluation. A new set of quantitative/qualitative evaluation techniques for compositing quality measurement was developed and showed that the RBC technique outperformed all other techniques, with the MaxRatio, MaxNDVI and MinRed techniques following in order of performance from best to worst.
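
    As an illustration of one of the reference methods, a sketch of maximum-NDVI compositing over a stack of co-registered acquisitions; the array layout and test data are assumptions.

    # Minimal sketch of MaxNDVI compositing: keep each pixel's greenest observation.
    import numpy as np

    def max_ndvi_composite(red_stack, nir_stack):
        """Stacks have shape (time, rows, cols); returns composited bands and best dates."""
        ndvi = (nir_stack - red_stack) / (nir_stack + red_stack + 1e-9)
        best = np.argmax(ndvi, axis=0)                       # date index of greenest observation
        rows, cols = np.indices(best.shape)
        return red_stack[best, rows, cols], nir_stack[best, rows, cols], best

    t, r, c = 6, 100, 100
    red = np.random.rand(t, r, c)
    nir = np.random.rand(t, r, c)
    cred, cnir, dates = max_ndvi_composite(red, nir)
    print(dates.shape, cred.shape)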

  10. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing using a weighted least squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images is captured and the performance is found to be better when compared with the moving average and the standard median filters, with respect to eliminating noise. This technique can be implemented efficiently on real-time SEM images, with all mandatory data for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with Savitzky-Golay and a weighted least squares error method, is developed. We apply the combined technique to single image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset, and the estimation of the corresponding original autocorrelation. In the few test cases involving different images, the efficiency of the developed noise reduction filter is shown to be significantly better than that of the other methods. Noise can be reduced efficiently with an appropriate choice of scan rate from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
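
    A rough sketch of two of the ingredients named above, Savitzky-Golay smoothing and cubic spline interpolation, applied to a synthetic SEM line profile with SciPy; the window length, polynomial order and omission of the weighting step are simplified assumptions, not the published filter.

    # Minimal sketch: smooth a noisy scan line, then resample it on a denser grid.
    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.interpolate import CubicSpline

    x = np.arange(0, 512)
    profile = np.sin(x / 40.0) + np.random.normal(scale=0.2, size=x.size)   # noisy scan line

    smoothed = savgol_filter(profile, window_length=21, polyorder=3)        # Savitzky-Golay smoothing
    spline = CubicSpline(x, smoothed)                                        # cubic spline model
    x_fine = np.linspace(0, 511, 4 * x.size)
    profile_fine = spline(x_fine)                                            # interpolated, denoised profile
    print(profile_fine.shape)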

  11. Classification of microscopic images of breast tissue

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia; Franzen, Lennart

    2004-05-01

    Breast cancer is the most common form of cancer among women. The diagnosis is usually performed by a pathologist, who subjectively evaluates tissue samples. The aim of our research is to develop techniques for the automatic classification of cancerous tissue, by analyzing histological samples of intact tissue taken with a biopsy. In our study, we considered 200 images presenting four different conditions: normal tissue, fibroadenosis, ductal cancer and lobular cancer. Methods to extract features have been investigated and described. One method is based on granulometries, which are size-shape descriptors widely used in mathematical morphology. Applications of granulometries lead to distribution functions whose moments are used as features. A second method is based on fractal geometry, which is well suited to quantifying biological structures. The fractal dimension of binary images has been computed using the euclidean distance mapping. Image classification has then been performed using the extracted features as input to a back-propagation neural network. A new method that combines genetic algorithms and morphological filters has also been investigated. In this case, the classification is based on a correlation measure. Very encouraging results have been obtained in pilot experiments using a small subset of images as the training set. Experimental results indicate the effectiveness of the proposed methods. Cancerous tissue was correctly classified in 92.5% of the cases.
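
    As a side note, a generic box-counting estimate of fractal dimension for a binary mask (the paper itself uses a euclidean-distance-mapping variant); the box sizes and random mask are assumptions.

    # Minimal sketch of box-counting fractal dimension for a binary image.
    import numpy as np

    def box_count(mask, size):
        h, w = mask.shape
        hh, ww = h - h % size, w - w % size                  # trim to a multiple of the box size
        blocks = mask[:hh, :ww].reshape(hh // size, size, ww // size, size)
        return np.count_nonzero(blocks.any(axis=(1, 3)))     # boxes containing any foreground

    def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        counts = [box_count(mask, s) for s in sizes]
        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        return -slope                                        # D is minus the log-log slope

    mask = np.random.rand(256, 256) > 0.7                    # stand-in segmented tissue mask
    print("estimated fractal dimension:", fractal_dimension(mask))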

  12. Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.

    PubMed

    Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K

    2016-07-20

    SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.

  13. 90Y Liver Radioembolization Imaging Using Amplitude-Based Gated PET/CT.

    PubMed

    Osborne, Dustin R; Acuff, Shelley; Neveu, Melissa; Kaman, Austin; Syed, Mumtaz; Fu, Yitong

    2017-05-01

    The usage of PET/CT to monitor patients with hepatocellular carcinoma following 90Y radioembolization has increased; however, image quality is often poor because of low count efficiency and respiratory motion. Motion can be corrected using gating techniques but at the expense of additional image noise. Amplitude-based gating has been shown to improve quantification in FDG PET, but few have used this technique in 90Y liver imaging. The patients shown in this work indicate that amplitude-based gating can be used in 90Y PET/CT liver imaging to provide motion-corrected images with higher estimates of activity concentration that may improve posttherapy dosimetry.
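
    A schematic sketch of amplitude-based gating: events are kept when the respiratory signal lies in a low-amplitude band chosen to pass a target fraction of counts. The specific gating rule, duty cycle and synthetic waveform are assumptions, not the clinical software's method.

    # Minimal sketch: keep PET events falling in a low-amplitude respiratory band.
    import numpy as np

    def amplitude_gate(event_times, resp_times, resp_amplitude, duty_cycle=0.35):
        """Return a boolean mask over events; band passes duty_cycle of counts."""
        amp_at_event = np.interp(event_times, resp_times, resp_amplitude)
        lo, hi = np.quantile(amp_at_event, [0.0, duty_cycle])   # assumed end-expiration band
        return (amp_at_event >= lo) & (amp_at_event <= hi)

    rng = np.random.default_rng(6)
    resp_t = np.linspace(0, 600, 6000)                                       # 10 min waveform (s)
    resp_a = np.sin(2 * np.pi * resp_t / 4.0) + 0.1 * rng.normal(size=resp_t.size)
    events = rng.uniform(0, 600, 100000)                                     # event arrival times (s)
    mask = amplitude_gate(events, resp_t, resp_a)
    print("fraction of counts kept:", mask.mean())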

  14. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage-1, image pixels are processed by applying a 3 × 3 window around each pixel and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage-2, the resultant image is further improved by a fractional order integration filter. Effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results of artificially corrupted standard test images and two real Echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of Echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Using endmembers in AVIRIS images to estimate changes in vegetative biomass

    NASA Technical Reports Server (NTRS)

    Smith, Milton O.; Adams, John B.; Ustin, Susan L.; Roberts, Dar A.

    1992-01-01

    Field techniques for estimating vegetative biomass are labor intensive, and rarely are used to monitor changes in biomass over time. Remote-sensing offers an attractive alternative to field measurements; however, because there is no simple correspondence between encoded radiance in multispectral images and biomass, it is not possible to measure vegetative biomass directly from AVIRIS images. Ways to estimate vegetative biomass by identifying community types and then applying biomass scalars derived from field measurements are investigated. Field measurements of community-scale vegetative biomass can be made, at least for local areas, but it is not always possible to identify vegetation communities unambiguously using remote measurements and conventional image-processing techniques. Furthermore, even when communities are well characterized in a single image, it typically is difficult to assess the extent and nature of changes in a time series of images, owing to uncertainties introduced by variations in illumination geometry, atmospheric attenuation, and instrumental responses. Our objective is to develop an improved method based on spectral mixture analysis to characterize and identify vegetative communities, that can be applied to multi-temporal AVIRIS and other types of images. In previous studies, multi-temporal data sets (AVIRIS and TM) of Owens Valley, CA were analyzed and vegetation communities were defined in terms of fractions of reference (laboratory and field) endmember spectra. An advantage of converting an image to fractions of reference endmembers is that, although fractions in a given pixel may vary from image to image in a time series, the endmembers themselves typically are constant, thus providing a consistent frame of reference.
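
    A minimal sketch of linear spectral unmixing into endmember fractions using non-negative least squares; the endmember spectra, noise level and normalization are synthetic assumptions, not the AVIRIS analysis itself.

    # Minimal sketch: recover endmember fractions for one pixel by NNLS unmixing.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(3)
    n_bands = 50
    endmembers = np.abs(rng.normal(size=(n_bands, 3)))        # columns: e.g. soil, green vegetation, shade
    true_fractions = np.array([0.2, 0.7, 0.1])
    pixel = endmembers @ true_fractions + rng.normal(scale=0.01, size=n_bands)

    fractions, residual = nnls(endmembers, pixel)             # non-negative least squares fit
    fractions /= fractions.sum()                              # normalize to sum-to-one fractions
    print("estimated fractions:", np.round(fractions, 2))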

  16. Gabor filter for the segmentation of skin lesions from ultrasonographic images

    NASA Astrophysics Data System (ADS)

    Petrella, Lorena I.; Gómez, W.; Alvarenga, André V.; Pereira, Wagner C. A.

    2012-05-01

    The present work applies a Gabor filter bank to the texture analysis of skin lesion images obtained by ultrasound biomicroscopy. The regions affected by the lesions were differentiated from the surrounding tissue in all the analyzed cases; however, the accuracy of the traced borders showed some limitations in part of the images. Future steps are being contemplated to enhance the technique's performance.
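
    A short sketch of a Gabor filter bank producing texture-energy features with scikit-image; the frequencies, orientations and stand-in image patch are assumptions rather than the study's parameters.

    # Minimal sketch: mean magnitude response of a small Gabor filter bank.
    import numpy as np
    from skimage.filters import gabor

    def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
        feats = []
        for f in frequencies:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                real, imag = gabor(img, frequency=f, theta=theta)
                feats.append(np.sqrt(real**2 + imag**2).mean())   # texture energy per filter
        return np.array(feats)

    img = np.random.rand(128, 128)                                # stand-in ultrasound patch
    print(gabor_features(img).shape)                              # 12 texture features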

  17. Quantum dot-based molecular imaging of cancer cell growth using a clone formation assay.

    PubMed

    Geng, Xia-Fei; Fang, Min; Liu, Shao-Ping; Li, Yan

    2016-10-01

    The aim of the present study was to investigate clonal growth behavior and analyze the proliferation characteristics of cancer cells. The MCF‑7 human breast cancer cell line, SW480 human colon cancer cell line and SGC7901 human gastric cancer cell line were selected to investigate the morphology of cell clones. Quantum dot‑based molecular targeted imaging techniques (which stained pan‑cytokeratin in the cytoplasm green and Ki67 in the cell nucleus yellow or red) were used to investigate the clone formation rate, cell morphology, discrete tendency, and Ki67 expression and distribution in clones. From the cell clone formation assay, the MCF‑7, SW480 and SGC7901 cells were observed to form clones on days 6, 8 and 12 of cell culture, respectively. These three types of cells had heterogeneous morphology, large nuclear:cytoplasmic ratios, and conspicuous pathological mitotic features. The cells at the clone periphery formed multiple pseudopodia. In certain clones, cancer cells at the borderline were separated from the central cell clusters or presented a discrete tendency. With quantum dot‑based molecular targeted imaging techniques, cells with strong Ki67 expression were predominantly shown to be distributed at the clone periphery, or concentrated on one side of the clones. In conclusion, cancer cell clones showed asymmetric growth behavior, and Ki67 was widely expressed in clones of these three cell lines, with strong expression around the clones, or aggregated at one side. The cell clone formation assay based on quantum dot molecular imaging offered a novel method to study the proliferative features of cancer cells, thus providing further insight into tumor biology.

  18. Diffraction based overlay and image based overlay on production flow for advanced technology node

    NASA Astrophysics Data System (ADS)

    Blancquaert, Yoann; Dezauzier, Christophe

    2013-04-01

    One of the main challenges for the lithography step is overlay control. For advanced technology nodes such as 28nm and 14nm, the overlay budget becomes very tight. Two overlay metrology techniques compete in our advanced semiconductor manufacturing: Diffraction Based Overlay (DBO) with the YieldStar S200 (ASML) and Image Based Overlay (IBO) with the ARCHER (KLA). In this paper we compare these two methods on three critical production layers: poly gate, contact and first metal. We show the overlay results of the two techniques, explore their accuracy, and compare the total measurement uncertainty (TMU) for the standard overlay targets of both techniques. We also examine the response and impact of both techniques to a process change, such as an additional hardmask TEOS layer on the front-end stack. The importance of target design is also addressed, and we propose a more suitable design for image-based targets. Finally, we present first results for embedded targets in the 14nm FDSOI node.
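
    A minimal sketch of a root-sum-square TMU combination, a common convention in overlay metrology; the exact component set used in the paper is an assumption here, and the numbers are purely illustrative:

        # Combine 3-sigma overlay metrology error components in quadrature
        # (measurement precision, TIS variability, tool-to-tool matching), all in nm.
        import math

        def total_measurement_uncertainty(precision_3s, tis_variability_3s, matching_3s):
            return math.sqrt(precision_3s ** 2 + tis_variability_3s ** 2 + matching_3s ** 2)

        # Hypothetical numbers for illustration only:
        print(total_measurement_uncertainty(0.3, 0.2, 0.4))  # ~0.54 nm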

  19. Determining biosonar images using sparse representations.

    PubMed

    Fontaine, Bertrand; Peremans, Herbert

    2009-05-01

    Echolocating bats are thought to be able to create an image of their environment by emitting pulses and analyzing the reflected echoes. In this paper, the theory of sparse representations and its more recent development into compressed sensing are applied to this biosonar image formation task. Considering the target image representation as sparse allows formulation of this inverse problem as a convex optimization problem for which well-defined and efficient solution methods have been established. The resulting technique, referred to as L1-minimization, is applied to simulated data to analyze its performance in delay accuracy and delay resolution experiments. This method performs comparably to the coherent receiver for the delay accuracy experiments, is quite robust to noise, and can reconstruct complex target impulse responses as generated by many closely spaced reflectors with different reflection strengths. This same technique, in addition to reconstructing biosonar target images, can be used to simultaneously localize these complex targets by interpreting location cues induced by the bat's head related transfer function. Finally, a tentative explanation is proposed for specific bat behavioral experiments in terms of the properties of target images as reconstructed by the L1-minimization method.
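
    A minimal sketch of sparse recovery of a target impulse response by L1-regularized least squares, solved with iterative soft-thresholding (ISTA); the dictionary of delayed pulses, signal lengths, and parameters are assumptions for illustration, not the paper's setup:

        # Recover a sparse reflector sequence x from an echo modeled as D @ x,
        # where each column of D is a time-delayed copy of the emitted pulse.
        import numpy as np

        def ista(D, echo, lam=0.1, step=None, n_iter=500):
            # Solve min_x 0.5*||D x - echo||^2 + lam*||x||_1 via soft-thresholding.
            if step is None:
                step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant
            x = np.zeros(D.shape[1])
            for _ in range(n_iter):
                grad = D.T @ (D @ x - echo)
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            return x

        # Hypothetical setup: pulse of length 32, two reflectors at delays 40 and 47
        rng = np.random.default_rng(0)
        pulse = rng.standard_normal(32)
        n = 128
        D = np.column_stack([np.roll(np.pad(pulse, (0, n - 32)), d) for d in range(n - 32)])
        true_x = np.zeros(D.shape[1]); true_x[40] = 1.0; true_x[47] = 0.6
        echo = D @ true_x + 0.01 * rng.standard_normal(n)
        x_hat = ista(D, echo, lam=0.05)
        print(np.argsort(-np.abs(x_hat))[:2])  # indices near 40 and 47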

  20. Structural Image Analysis of the Brain in Neuropsychology Using Magnetic Resonance Imaging (MRI) Techniques.

    PubMed

    Bigler, Erin D

    2015-09-01

    Magnetic resonance imaging (MRI) of the brain provides exceptional image quality for visualization and neuroanatomical classification of brain structure. A variety of image analysis techniques provide both qualitative and quantitative methods to relate brain structure with neuropsychological outcome, and these are reviewed herein. Of particular importance are more automated methods that permit analysis of a broad spectrum of anatomical measures, including volume, thickness and shape. The challenge for neuropsychology is which metric to use, for which disorder, and when image analysis methods should be applied to assess brain structure and pathology. A basic overview is provided of how different MRI sequences relate to anatomy and pathoanatomy in assessing normal and abnormal findings. Some interpretive guidelines are offered, including factors related to similarity and symmetry of typical brain development along with size-normalcy features of brain anatomy related to function. The review concludes with a detailed example of various quantitative techniques applied to analyzing brain structure for neuropsychological outcome studies in traumatic brain injury.
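
    A minimal sketch of one quantitative measure mentioned above, regional volume computed from a labeled MRI segmentation using nibabel; the file name and label value are hypothetical:

        # Volume of a labeled structure = voxel count * voxel volume (mm^3).
        import numpy as np
        import nibabel as nib

        def label_volume_mm3(label_image_path, label_value):
            img = nib.load(label_image_path)
            data = np.asarray(img.dataobj)
            voxel_volume = np.prod(img.header.get_zooms()[:3])
            return int((data == label_value).sum()) * float(voxel_volume)

        # Hypothetical usage: left hippocampus is label 17 in the FreeSurfer aseg scheme
        # print(label_volume_mm3("aseg.nii.gz", 17))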
