Sample records for multi-parameter image analysis

  1. Detecting ordered small molecule drug aggregates in live macrophages: a multi-parameter microscope image data acquisition and analysis strategy

    PubMed Central

    Rzeczycki, Phillip; Yoon, Gi Sang; Keswani, Rahul K.; Sud, Sudha; Stringer, Kathleen A.; Rosania, Gus R.

    2017-01-01

    Following prolonged administration, certain orally bioavailable but poorly soluble small molecule drugs are prone to precipitate out and form crystal-like drug inclusions (CLDIs) within the cells of living organisms. In this research, we present a quantitative multi-parameter imaging platform for measuring the fluorescence and polarization diattenuation signals of cells harboring intracellular CLDIs. To validate the imaging system, the FDA-approved drug clofazimine (CFZ) was used as a model compound. Our results demonstrated that a quantitative multi-parameter microscopy image analysis platform can be used to study drug sequestering macrophages, and to detect the formation of ordered molecular aggregates formed by poorly soluble small molecule drugs in animals. PMID:28270989

  2. Detecting ordered small molecule drug aggregates in live macrophages: a multi-parameter microscope image data acquisition and analysis strategy.

    PubMed

    Rzeczycki, Phillip; Yoon, Gi Sang; Keswani, Rahul K; Sud, Sudha; Stringer, Kathleen A; Rosania, Gus R

    2017-02-01

    Following prolonged administration, certain orally bioavailable but poorly soluble small molecule drugs are prone to precipitate out and form crystal-like drug inclusions (CLDIs) within the cells of living organisms. In this research, we present a quantitative multi-parameter imaging platform for measuring the fluorescence and polarization diattenuation signals of cells harboring intracellular CLDIs. To validate the imaging system, the FDA-approved drug clofazimine (CFZ) was used as a model compound. Our results demonstrated that a quantitative multi-parameter microscopy image analysis platform can be used to study drug sequestering macrophages, and to detect the formation of ordered molecular aggregates formed by poorly soluble small molecule drugs in animals.

  3. Analysis on unevenness of skin color using the melanin and hemoglobin components separated by independent component analysis of skin color image

    NASA Astrophysics Data System (ADS)

    Ojima, Nobutoshi; Fujiwara, Izumi; Inoue, Yayoi; Tsumura, Norimichi; Nakaguchi, Toshiya; Iwata, Kayoko

    2011-03-01

    Uneven distribution of skin color is one of the biggest concerns about facial skin appearance. Recently, several techniques to analyze skin color have been introduced that separate skin color information into chromophore components, such as melanin and hemoglobin. However, there are few reports on quantitative analysis of skin color unevenness that consider the type of chromophore, clusters of different sizes, and the concentration of each chromophore. We propose a new image analysis and simulation method based on chromophore analysis and spatial frequency analysis. The method is composed of three main techniques: independent component analysis (ICA) to extract hemoglobin and melanin chromophores from a single skin color image; an image pyramid technique that decomposes each chromophore into multi-resolution images, which can be used to identify clusters of different sizes or spatial frequencies; and analysis of the histogram obtained from each multi-resolution image to extract unevenness parameters. As an application of the method, we also introduce an image processing technique to change the unevenness of the melanin component. The method showed a high capability to analyze the unevenness of each skin chromophore: 1) vague unevenness on skin could be discriminated from noticeable pigmentation such as freckles or acne; 2) by analyzing the unevenness parameters obtained from each multi-resolution image for Japanese women, age-related changes were observed in the parameters of the middle spatial frequencies; 3) an image processing system modulating the parameters was proposed to change the unevenness of skin images along the axis of the obtained age-related change in real time.
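
    The ICA step described above can be illustrated with a minimal sketch, assuming a shading-corrected RGB skin patch processed in log (optical-density) space and scikit-learn's FastICA; deciding which independent component corresponds to melanin versus hemoglobin requires the spectral criteria from the paper and is not attempted here.

    ```python
    # Minimal sketch of ICA-based chromophore separation from a skin RGB image.
    # Assumptions: well-lit, shading-corrected skin patch; component-to-chromophore
    # assignment (melanin vs. hemoglobin) must be decided afterwards, e.g. from the
    # colour direction of each component, which is not done here.
    import numpy as np
    from sklearn.decomposition import FastICA

    def separate_chromophores(rgb):
        """rgb: float array (H, W, 3) with values in (0, 1]."""
        h, w, _ = rgb.shape
        # Work in optical-density (log) space, where absorbances mix ~linearly.
        od = -np.log(np.clip(rgb, 1e-6, None)).reshape(-1, 3)
        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(od)      # (H*W, 2) chromophore-like maps
        return sources.reshape(h, w, 2), ica.mixing_
    ```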

  4. Quantitative analysis of vascular parameters for micro-CT imaging of vascular networks with multi-resolution.

    PubMed

    Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie

    2016-03-01

    Previous studies showed that all vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters at multiple resolutions nor the distinguishability of vascular parameters between different data groups has been discussed. In this paper, we propose a quantitative analysis method for vascular parameters of vascular networks imaged at multiple resolutions, by analyzing the sensitivity of vascular parameters across resolutions and estimating the distinguishability of vascular parameters between different data groups. Combining sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution changes but could also better distinguish vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.

  5. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  6. a Region-Based Multi-Scale Approach for Object-Based Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz Erdemir, M.; Tonbul, H.

    2016-06-01

    Within the last two decades, object-based image analysis (OBIA), which considers objects (i.e. groups of pixels) instead of pixels, has gained popularity and attracted increasing interest. The most important stage of OBIA is image segmentation, which groups spectrally similar adjacent pixels considering not only spectral features but also spatial and textural features. Although there are several parameters (scale, shape, compactness and band weights) to be set by the analyst, the scale parameter stands out as the most important parameter in the segmentation process. Estimating the optimal scale parameter, which depends on image resolution, image object size and the characteristics of the study area, is crucial for increasing classification accuracy. In this study, two scale-selection strategies were implemented in the image segmentation process using a pan-sharpened QuickBird-2 image. The first strategy estimates optimal scale parameters for eight sub-regions of the image. For this purpose, the local variance/rate of change (LV-RoC) graphs produced by the ESP-2 tool were analysed to determine fine, moderate and coarse scales for each region. In the second strategy, the image was segmented using the three candidate scale values (fine, moderate, coarse) determined from the LV-RoC graph calculated for the whole image. The nearest neighbour classifier was applied in all segmentation experiments, and an equal number of pixels was randomly selected to calculate accuracy metrics (overall accuracy and kappa coefficient). Comparison of region-based and image-based segmentation on the classified images showed that region-based multi-scale OBIA produced significantly more accurate results than image-based single-scale OBIA. The difference in classification accuracy reached 10% in terms of overall accuracy.

  7. Multi-tissue partial volume quantification in multi-contrast MRI using an optimised spectral unmixing approach.

    PubMed

    Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme

    2018-06-01

    Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The main contribution of this paper is twofold. It first proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which, in the context of multi-contrast MRI data acquisition, allows the imaging sequence parameters to be set appropriately. Second, an efficient proportion quantification algorithm is proposed, based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. Furthermore, the resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
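
    For orientation, one plausible form of the penalised criterion described above is sketched below in LaTeX; the notation and the choice of regulariser are assumptions, not the authors' exact model.

    ```latex
    % y_j : multi-contrast signal at voxel j,  S : matrix of pure-tissue signatures,
    % a_j : tissue proportions at voxel j,  N(j) : spatial neighbours of voxel j.
    \min_{\{a_j\}} \; \sum_j \lVert y_j - S a_j \rVert_2^2
      \;+\; \lambda \sum_j \sum_{k \in N(j)} \lVert a_j - a_k \rVert_2^2
    \quad \text{s.t.} \quad a_j \ge 0, \;\; \mathbf{1}^\top a_j = 1 .
    ```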

  8. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images at a single bed. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits a relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis on the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
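
    For reference, the standard Patlak model and a commonly used generalized (reversible-uptake) form referred to above are written out below in LaTeX; the reversibility term follows the usual formulation with a loss rate, and the authors' exact parameterisation may differ.

    ```latex
    % Standard Patlak (irreversible uptake): tissue activity C_T vs. plasma input C_p
    \frac{C_T(t)}{C_p(t)} \;=\; K_i \, \frac{\int_0^t C_p(\tau)\, d\tau}{C_p(t)} \;+\; V
    % Generalized (non-linear) Patlak, with an efflux/loss rate k_loss accounting for
    % reversible uptake (a common formulation; the exact form in the paper may differ):
    C_T(t) \;=\; K_i \int_0^t C_p(\tau)\, e^{-k_{\mathrm{loss}}\,(t-\tau)}\, d\tau \;+\; V\, C_p(t)
    ```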

  9. An empirical study on the utility of BRDF model parameters and topographic parameters for mapping vegetation in a semi-arid region with MISR imagery

    USDA-ARS?s Scientific Manuscript database

    Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...

  10. Studying Axon-Astrocyte Functional Interactions by 3D Two-Photon Ca2+ Imaging: A Practical Guide to Experiments and "Big Data" Analysis.

    PubMed

    Savtchouk, Iaroslav; Carriero, Giovanni; Volterra, Andrea

    2018-01-01

    Recent advances in fast volumetric imaging have enabled rapid generation of large amounts of multi-dimensional functional data. While many computer frameworks exist for data storage and analysis of the multi-gigabyte Ca2+ imaging experiments in neurons, they are less useful for analyzing Ca2+ dynamics in astrocytes, where transients do not follow a predictable spatio-temporal distribution pattern. In this manuscript, we provide a detailed protocol and commentary for recording and analyzing three-dimensional (3D) Ca2+ transients through time in GCaMP6f-expressing astrocytes of adult brain slices in response to axonal stimulation, using our recently developed tools to perform interactive exploration, filtering, and time-correlation analysis of the transients. In addition to the protocol, we release our in-house software tools and discuss parameters pertinent to conducting axonal stimulation/response experiments across various brain regions and conditions. Our software tools are available from the Volterra Lab webpage at https://wwwfbm.unil.ch/dnf/group/glia-an-active-synaptic-partner/member/volterra-andrea-volterra in the form of software plugins for ImageJ (NIH), a de facto standard in scientific image analysis. Three programs are available: MultiROI_TZ_profiler for interactive graphing of several movable ROIs simultaneously, Gaussian_Filter5D for Gaussian filtering in several dimensions, and Correlation_Calculator for computing various cross-correlation parameters on voxel collections through time.

  11. Scattering and absorption measurements of cervical tissues measured using low cost multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bar-Am, Kfir; Cataldo, Leigh; Bolton, Frank J.; Kahn, Bruce S.; Levitz, David

    2018-02-01

    Cervical cancer is a leading cause of death for women in low resource settings. In order to better detect cervical dysplasia, a low cost multi-spectral colposcope was developed utilizing low cost LEDs and an area scan camera. The device is capable of both traditional colposcopic imaging and multi-spectral image capture. Following initial bench testing, the device was deployed to a gynecology clinic where it was used to image patients in a colposcopy setting. Both traditional colposcopic images and spectral data from patients were uploaded to a cloud server for remote analysis. Multi-spectral imaging (30-second capture) took place before any clinical procedure; the standard of care was followed thereafter. If acetic acid was used in the standard of care, a post-acetowhitening colposcopic image was also captured. In analyzing the data, normal and abnormal regions were identified in the colposcopic images by an expert clinician. Spectral data were fit to a theoretical model based on diffusion theory, yielding information on scattering and absorption parameters. Data were grouped according to clinician labeling of the tissue, as well as any additional clinical test results available (Pap, HPV, biopsy). Altogether, N=20 patients were imaged in this study, with 9 of them abnormal. In comparing normal and abnormal regions of interest from patients, substantial differences were measured in blood content, while differences in oxygen saturation parameters were more subtle. These results suggest that optical measurements made using low cost spectral imaging systems can distinguish between normal and pathological tissues.
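
    Diffusion-theory fits of this kind typically parameterise reduced scattering as a wavelength power law and absorption as a blood-volume/oxygen-saturation mixture of hemoglobin spectra. The LaTeX below sketches that usual parameterisation only as orientation; it is an assumption about the model class, not the authors' exact equations.

    ```latex
    % Typical tissue-optics parameterisation behind such diffusion-theory fits:
    \mu_s'(\lambda) = a \left( \frac{\lambda}{\lambda_0} \right)^{-b}, \qquad
    \mu_a(\lambda) = f_{\mathrm{blood}} \left[ \mathrm{SO_2}\,\mu_{a,\mathrm{HbO_2}}(\lambda)
              + (1-\mathrm{SO_2})\,\mu_{a,\mathrm{Hb}}(\lambda) \right],
    % with the measured reflectance R(\lambda) fitted to a diffusion-approximation
    % expression R_d(\mu_a, \mu_s') to recover blood content f_blood and saturation SO_2.
    ```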

  12. Description of patellar movement by 3D parameters obtained from dynamic CT acquisition

    NASA Astrophysics Data System (ADS)

    de Sá Rebelo, Marina; Moreno, Ramon Alfredo; Gobbi, Riccardo Gomes; Camanho, Gilberto Luis; de Ávila, Luiz Francisco Rodrigues; Demange, Marco Kawamura; Pecora, Jose Ricardo; Gutierrez, Marco Antonio

    2014-03-01

    The patellofemoral joint is critical in the biomechanics of the knee. Patellofemoral instability is one condition that generates pain, functional impairment and often requires surgery as part of orthopedic treatment. The analysis of patellofemoral dynamics has been performed with several medical image modalities. The clinical parameters assessed are mainly based on 2D measurements, such as the patellar tilt angle and the lateral shift, among others. Besides, the acquisition protocols are mostly performed with the leg held static at fixed angles. The use of a helical multi-slice CT scanner can allow the capture and display of the joint's movement performed actively by the patient. However, the orthopedic applications of this scanner have not yet been standardized or widespread. In this work we present a method to evaluate the biomechanics of the patellofemoral joint during active contraction using multi-slice CT images. This approach can greatly improve the analysis of patellar instability by displaying the physiology during muscle contraction. The movement was evaluated by computing its 3D displacements and rotations from different knee angles. The first processing step registered the images at both angles based on the femur's position. The transformation matrix of the patella was then calculated from the images, which provided the rotations and translations performed by the patella from its position in the first image to its position in the second image. Analysis of these parameters for all frames provided real 3D information about the patellar displacement.
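
    A minimal sketch of the final step (turning a rigid transform of the patella into displacement and rotation parameters) is given below, assuming a 4x4 homogeneous matrix is already available from the femur-based registration; the matrix and the 'xyz' Euler-angle convention are assumptions for illustration.

    ```python
    # Minimal sketch: decomposing a 4x4 rigid transform of the patella into a 3D
    # translation and Euler-angle rotations. The transform T and the angle
    # convention are placeholders, not the authors' exact processing chain.
    import numpy as np
    from scipy.spatial.transform import Rotation

    def decompose_rigid(T):
        """T: 4x4 homogeneous transform mapping the patella from pose A to pose B."""
        R = T[:3, :3]
        t = T[:3, 3]
        angles_deg = Rotation.from_matrix(R).as_euler("xyz", degrees=True)
        return t, angles_deg   # displacement and rotations about x, y, z

    # Example: a pure 10-degree rotation about z plus a 5 mm shift along x.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("z", 10, degrees=True).as_matrix()
    T[0, 3] = 5.0
    print(decompose_rigid(T))
    ```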

  13. A generalized parametric response mapping method for analysis of multi-parametric imaging: A feasibility study with application to glioblastoma.

    PubMed

    Lausch, Anthony; Yeung, Timothy Pok-Chi; Chen, Jeff; Law, Elton; Wang, Yong; Urbini, Benedetta; Donelli, Filippo; Manco, Luigi; Fainardi, Enrico; Lee, Ting-Yim; Wong, Eugene

    2017-11-01

    Parametric response map (PRM) analysis of functional imaging has been shown to be an effective tool for early prediction of cancer treatment outcomes and may also be well-suited toward guiding personalized adaptive radiotherapy (RT) strategies such as sub-volume boosting. However, the PRM method was primarily designed for analysis of longitudinally acquired pairs of single-parameter image data. The purpose of this study was to demonstrate the feasibility of a generalized parametric response map analysis framework, which enables analysis of multi-parametric data while maintaining the key advantages of the original PRM method. MRI-derived apparent diffusion coefficient (ADC) and relative cerebral blood volume (rCBV) maps acquired at 1 and 3-months post-RT for 19 patients with high-grade glioma were used to demonstrate the algorithm. Images were first co-registered and then standardized using normal tissue image intensity values. Tumor voxels were then plotted in a four-dimensional Cartesian space with coordinate values equal to a voxel's image intensity in each of the image volumes and an origin defined as the multi-parametric mean of normal tissue image intensity values. Voxel positions were orthogonally projected onto a line defined by the origin and a pre-determined response vector. The voxels are subsequently classified as positive, negative or nil, according to whether projected positions along the response vector exceeded a threshold distance from the origin. The response vector was selected by identifying the direction in which the standard deviation of tumor image intensity values was maximally different between responding and non-responding patients within a training dataset. Voxel classifications were visualized via familiar three-class response maps and then the fraction of tumor voxels associated with each of the classes was investigated for predictive utility analogous to the original PRM method. Independent PRM and MPRM analyses of the contrast-enhancing lesion (CEL) and a 1 cm shell of surrounding peri-tumoral tissue were performed. Prediction using tumor volume metrics was also investigated. Leave-one-out cross validation (LOOCV) was used in combination with permutation testing to assess preliminary predictive efficacy and estimate statistically robust P-values. The predictive endpoint was overall survival (OS) greater than or equal to the median OS of 18.2 months. Single-parameter PRM and multi-parametric response maps (MPRMs) were generated for each patient and used to predict OS via the LOOCV. Tumor volume metrics (P ≥ 0.071 ± 0.01) and single-parameter PRM analyses (P ≥ 0.170 ± 0.01) were not found to be predictive of OS within this study. MPRM analysis of the peri-tumoral region but not the CEL was found to be predictive of OS with a classification sensitivity, specificity and accuracy of 80%, 100%, and 89%, respectively (P = 0.001 ± 0.01). The feasibility of a generalized MPRM analysis framework was demonstrated with improved prediction of overall survival compared to the original single-parameter method when applied to a glioblastoma dataset. The proposed algorithm takes the spatial heterogeneity in multi-parametric response into consideration and enables visualization. MPRM analysis of peri-tumoral regions was shown to have predictive potential supporting further investigation of a larger glioblastoma dataset. © 2017 American Association of Physicists in Medicine.
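
    The core projection and classification step described above can be sketched as follows: each tumour voxel is treated as a point in multi-parametric space, centred on the normal-tissue mean, projected onto a chosen response vector, and thresholded into positive/negative/nil classes. Variable names and the threshold are placeholders, not the authors' implementation.

    ```python
    # Minimal sketch of the MPRM projection/classification step.
    import numpy as np

    def mprm_classify(voxels, normal_mean, response_vec, threshold):
        """voxels: (N, P) array of standardized intensities per tumour voxel."""
        v = response_vec / np.linalg.norm(response_vec)
        proj = (voxels - normal_mean) @ v        # signed distance along response vector
        labels = np.zeros(len(proj), dtype=int)
        labels[proj > threshold] = 1             # "positive" response class
        labels[proj < -threshold] = -1           # "negative" response class
        return labels, proj

    # The fraction of voxels in each class can then be used as a predictor,
    # analogously to the original PRM method: pos_frac = np.mean(labels == 1), etc.
    ```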

  14. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multi-trace-gas) image sequences and to provide solutions to the extended aperture problem. In this study, sensitivity studies have been performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
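
    A minimal sketch of the basic structure tensor idea for motion estimation from an image sequence is given below; the published framework also estimates source strengths, diffusion and decay terms, which this sketch does not attempt.

    ```python
    # Minimal sketch: spatio-temporal structure tensor motion estimation for a
    # sequence I(t, y, x). The eigenvector of the smallest eigenvalue of the
    # smoothed tensor gives the local space-time orientation of constant brightness,
    # from which the velocity components follow. Smoothing scale sigma is arbitrary.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_tensor_flow(seq, sigma=2.0, eps=1e-8):
        """seq: float array (T, H, W). Returns (u, v) velocity estimates per pixel."""
        It, Iy, Ix = np.gradient(seq)            # gradients along t, y, x
        grads = (Ix, Iy, It)
        J = np.empty(seq.shape + (3, 3))
        for i in range(3):
            for j in range(3):
                J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
        w, V = np.linalg.eigh(J)                 # eigenvalues ascending
        e = V[..., :, 0]                         # eigenvector of smallest eigenvalue
        u = e[..., 0] / (e[..., 2] + eps)        # displacement per frame along x
        v = e[..., 1] / (e[..., 2] + eps)        # displacement per frame along y
        return u, v
    ```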

  15. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as well as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grapevine surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels in R, G, and IR). The sensor can capture and analyze the grape canopy from its reflectance features and identify different water stress levels; this research aims to address that problem. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in near-infrared, green and red spectral bands. From the acquired images, color features based on color space and reflectance features based on image processing were calculated. The results showed that these parameters had potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.

  16. Cell Motility Dynamics: A Novel Segmentation Algorithm to Quantify Multi-Cellular Bright Field Microscopy Images

    PubMed Central

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications. PMID:22096600
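
    A greatly simplified sketch of the patch-classification idea behind MultiCellSeg follows, using a single SVM on a few basic patch features; the actual method uses a cascade of SVMs plus graph-cut refinement, and the patch size and features here are arbitrary choices, not the paper's.

    ```python
    # Simplified sketch: label bright-field image patches as "cellular" vs "background"
    # with one SVM. Training labels must come from annotated images.
    import numpy as np
    from sklearn.svm import SVC

    def patch_features(img, size=32):
        """img: 2D grayscale array. Returns per-patch features and patch coordinates."""
        h, w = img.shape
        feats, coords = [], []
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                p = img[y:y + size, x:x + size].astype(float)
                gy, gx = np.gradient(p)
                feats.append([p.mean(), p.std(), np.abs(gx).mean() + np.abs(gy).mean()])
                coords.append((y, x))
        return np.array(feats), coords

    # Training:   X, _ = patch_features(train_img); clf = SVC(kernel="rbf").fit(X, y_patch)
    # Prediction: X_new, coords = patch_features(new_img); patch_labels = clf.predict(X_new)
    ```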

  17. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    PubMed

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single-cell processing to perform objective, accurate quantitative analyses for various biological applications.

  18. Quantitative analysis of ultrasonic images of fibrotic liver using co-occurrence matrix based on multi-Rayleigh model

    NASA Astrophysics Data System (ADS)

    Isono, Hiroshi; Hirata, Shinnosuke; Hachiya, Hiroyuki

    2015-07-01

    In medical ultrasonic images of liver disease, a texture with a speckle pattern indicates a microscopic structure such as nodules surrounded by fibrous tissues in hepatitis or cirrhosis. We have been applying texture analysis based on a co-occurrence matrix to ultrasonic images of fibrotic liver for quantitative tissue characterization. A co-occurrence matrix consists of the probability distribution of brightness of pixel pairs specified with spatial parameters and gives new information on liver disease. Ultrasonic images of different types of fibrotic liver were simulated and the texture-feature contrast was calculated to quantify the co-occurrence matrices generated from the images. The results show that the contrast converges with a value that can be theoretically estimated using a multi-Rayleigh model of echo signal amplitude distribution. We also found that the contrast value increases as liver fibrosis progresses and fluctuates depending on the size of fibrotic structure.
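
    The texture-feature contrast used above is the standard grey-level co-occurrence matrix (GLCM) contrast; a minimal scikit-image sketch follows, with the distance and angle as placeholders for the spatial parameters mentioned in the abstract (older scikit-image releases spell the functions greycomatrix/greycoprops).

    ```python
    # Minimal sketch: GLCM contrast of an ultrasound B-mode image patch.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_contrast(img_u8, distance=4, angle=0.0):
        """img_u8: 2D uint8 image patch. Returns the co-occurrence contrast value."""
        glcm = graycomatrix(img_u8, distances=[distance], angles=[angle],
                            levels=256, symmetric=True, normed=True)
        # contrast = sum_{i,j} (i - j)^2 * P(i, j)
        return graycoprops(glcm, "contrast")[0, 0]
    ```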

  19. Determination of gravity wave parameters in the airglow combining photometer and imager data

    NASA Astrophysics Data System (ADS)

    Nyassor, Prosper K.; Arlen Buriti, Ricardo; Paulino, Igo; Medeiros, Amauri F.; Takahashi, Hisao; Wrasse, Cristiano M.; Gobbi, Delano

    2018-05-01

    Mesospheric airglow measurements of two or three layers were used to characterize both vertical and horizontal parameters of gravity waves. The data set was acquired coincidentally from a multi-channel filter (Multi-3) photometer and an all-sky imager located at São João do Cariri (7.4° S, 36.5° W) in the equatorial region from 2001 to 2007. Using a least-square fitting and wavelet analysis technique, the phase and amplitude of each observed wave were determined, as well as the amplitude growth. Using the dispersion relation of gravity waves, the vertical and horizontal wavelengths were estimated and compared to the horizontal wavelength obtained from the keogram analysis of the images observed by the all-sky imager. The results show that both horizontal and vertical wavelengths, obtained from the dispersion relation and keogram analysis, agree very well for the waves observed on the nights of 14 October and 18 December 2006. The determined parameters showed that the observed wave on the night of 18 December 2006 had a period of ~43.8 ± 2.19 min, with a horizontal wavelength of 235.66 ± 11.78 km and a downward phase propagation, whereas that of 14 October 2006 propagated with a period of ~36.00 ± 1.80 min, a horizontal wavelength of ~195 ± 9.80 km, and an upward phase propagation. The observation of a wave taken by a photometer and an all-sky imager allowed us to conclude that the same wave could be observed by both instruments, permitting the investigation of the two-dimensional wave parameters.
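
    The dispersion relation commonly used in such airglow studies to relate horizontal and vertical wavelengths is sketched below in its simplified, non-dissipative form; this is an assumed standard expression, and the paper's exact formulation may include additional terms.

    ```latex
    % Simplified gravity-wave dispersion relation linking the vertical wavenumber m
    % to the horizontal wavenumber k_h:
    m^2 \;=\; \frac{N^2}{(c - \bar{u})^2} \;-\; k_h^2 \;-\; \frac{1}{4H^2},
    \qquad \lambda_z = \frac{2\pi}{m}, \quad \lambda_h = \frac{2\pi}{k_h},
    % with N the Brunt-Vaisala frequency, c the observed phase speed, \bar{u} the
    % background wind along the propagation direction, and H the scale height.
    ```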

  20. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least-squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.

  1. Quantitative Assessment of Retinopathy Using Multi-parameter Image Analysis

    PubMed Central

    Ghanian, Zahra; Staniszewski, Kevin; Jamali, Nasim; Sepehr, Reyhaneh; Wang, Shoujian; Sorenson, Christine M.; Sheibani, Nader; Ranji, Mahsa

    2016-01-01

    A multi-parameter quantification method was implemented to quantify retinal vascular injuries in microscopic images of clinically relevant eye diseases. This method was applied to wholemount retinal trypsin digest images of diabetic Akita/+ and bcl-2 knockout mouse models. Five unique features of the retinal vasculature were extracted to monitor early structural changes and retinopathy, as well as to quantify disease progression. Our approach was validated through simulations of retinal images. Results showed fewer cells (P = 5.1205e-05), greater population ratios of endothelial cells to pericytes (PCs) (P = 5.1772e-04; an indicator of PC loss), higher fractal dimension (P = 8.2202e-05), smaller vessel coverage (P = 1.4214e-05), and a greater number of acellular capillaries (P = 7.0414e-04) for diabetic retinas compared to normal retinas. Quantification using the present method would be helpful in evaluating physiological and pathological retinopathy in a high-throughput and reproducible manner. PMID:27186534
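
    One of the five features above, the fractal dimension of the binarised vessel network, can be sketched with a basic box-counting estimate; the box sizes and the assumption of a pre-computed binary vessel mask are illustrative choices, and the paper's remaining features are not sketched here.

    ```python
    # Minimal sketch: box-counting fractal dimension of a binary vessel mask.
    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
        """mask: 2D boolean array, True on vessel pixels."""
        counts = []
        for s in sizes:
            h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
        # Slope of log(count) vs log(1/size) estimates the fractal dimension.
        coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return coeffs[0]
    ```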

  2. Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment

    NASA Astrophysics Data System (ADS)

    David, S.; Visvikis, D.; Roux, C.; Hatt, M.

    2011-09-01

    In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently to both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on the biological tumor volume definition for radiotherapy applications.

  3. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    NASA Astrophysics Data System (ADS)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. We add a parameter λ to the fuzzy distance measurement formula to improve the multi-objective optimization. The parameter λ adjusts the weight of the pixel local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two different experiments show that the novel fuzzy C-means approach achieves efficient performance and computational time when segmenting images corrupted by different types of noise.
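
    A common form of fuzzy C-means with a neighbourhood (local-information) term of the kind described above is sketched below in LaTeX; the exact weighting used by the authors is an assumption.

    ```latex
    % u_{ik}: membership of pixel k in cluster i,  v_i: cluster centre,
    % N_k: neighbours of pixel k,  m > 1: fuzzifier,  \lambda: local-information weight.
    J \;=\; \sum_{i=1}^{C} \sum_{k=1}^{N} u_{ik}^{m}\, \lVert x_k - v_i \rVert^2
      \;+\; \lambda \sum_{i=1}^{C} \sum_{k=1}^{N} u_{ik}^{m}\,
            \frac{1}{|N_k|} \sum_{r \in N_k} \lVert x_r - v_i \rVert^2,
    \qquad \text{s.t.} \quad \sum_{i=1}^{C} u_{ik} = 1 .
    ```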

  4. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328

  5. Noncontact measurement of heart rate using facial video illuminated under natural light and signal weighted analysis.

    PubMed

    Yan, Yonggang; Ma, Xiang; Yao, Lifeng; Ouyang, Jianfei

    2015-01-01

    Non-contact, remote measurement of vital physical signals is important for reliable and comfortable physiological self-assessment. We present a novel optical imaging-based method to measure such signals. Using a digital camera and ambient light, cardiovascular pulse waves were accurately extracted from color facial videos, and vital physiological parameters such as heart rate were measured using a proposed signal-weighted analysis method. The measured heart rates were consistent with those measured simultaneously with reference technologies (r=0.94, p<0.001 for HR). The results show that the imaging-based method is suitable for measuring these physiological parameters and provides a reliable and comfortable measurement mode. The study lays a physical foundation for noninvasive measurement of multiple physiological parameters in humans.
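
    The generic imaging-photoplethysmography pipeline implied above (ROI colour trace, band-pass filtering, spectral peak) can be sketched as follows; the paper's specific signal-weighted analysis step is not reproduced, and the frame rate and cardiac band edges are assumptions.

    ```python
    # Minimal sketch: heart rate from a facial video ROI under ambient light.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def heart_rate_bpm(frames, fps=30.0, band=(0.7, 3.0)):
        """frames: (T, H, W, 3) uint8 video of a facial region of interest."""
        trace = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)  # green channel
        trace = trace - trace.mean()
        b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
        filtered = filtfilt(b, a, trace)
        spectrum = np.abs(np.fft.rfft(filtered)) ** 2
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
        return 60.0 * freqs[np.argmax(spectrum)]   # dominant cardiac frequency in bpm
    ```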

  6. Local variance for multi-scale analysis in geomorphometry.

    PubMed

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-07-15

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
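
    Steps 2-3 of the method above (local variance as the mean 3 x 3 standard deviation per scale level, and its rate of change between levels) can be sketched as follows, assuming the up-scaled land-surface parameter rasters from step 1 are already available.

    ```python
    # Minimal sketch: local variance (LV) and its rate of change (ROC-LV) across scales.
    import numpy as np
    from scipy.ndimage import generic_filter

    def local_variance(raster):
        """Mean standard deviation within a 3x3 moving window."""
        return generic_filter(raster.astype(float), np.std, size=3).mean()

    def roc_lv(rasters_by_level):
        """rasters_by_level: list of 2D arrays, one per scale level (fine to coarse)."""
        lv = np.array([local_variance(r) for r in rasters_by_level])
        roc = 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]   # percent change from previous level
        return lv, roc   # peaks in roc mark characteristic scales
    ```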

  7. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138

  8. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In remote sensing image processing, image segmentation is a preliminary step for subsequent analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing has become a prevailing approach; its core is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of this algorithm: existing segmentation algorithms are analyzed and the watershed algorithm is selected as an optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. Several experiments are carried out, showing that the modified FNEA algorithm produces better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.

  9. MultiDrizzle: An Integrated Pyraf Script for Registering, Cleaning and Combining Images

    NASA Astrophysics Data System (ADS)

    Koekemoer, A. M.; Fruchter, A. S.; Hook, R. N.; Hack, W.

    We present the new PyRAF-based `MultiDrizzle' script, which is aimed at providing a one-step approach to combining dithered HST images. The purpose of this script is to allow easy interaction with the complex suite of tasks in the IRAF/STSDAS `dither' package, as well as the new `PyDrizzle' task, while at the same time retaining the flexibility of these tasks through a number of parameters. These parameters control the various individual steps, such as sky subtraction, image registration, `drizzling' onto separate output images, creation of a clean median image, transformation of the median with `blot' and creation of cosmic ray masks, as well as the final image combination step using `drizzle'. The default parameters of all the steps are set so that the task will work automatically for a wide variety of different types of images, while at the same time allowing adjustment of individual parameters for special cases. The script currently works for both ACS and WFPC2 data, and is now being tested on STIS and NICMOS images. We describe the operation of the script and the effect of various parameters, particularly in the context of combining images from dithered observations using ACS and WFPC2. Additional information is also available at the `MultiDrizzle' home page: http://www.stsci.edu/~koekemoe/multidrizzle/

  10. Associative image analysis: a method for automated quantification of 3D multi-parameter images of brain tissue

    PubMed Central

    Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath

    2009-01-01

    Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3 – 5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81–92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697

  11. Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology

    PubMed Central

    Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted

    2014-01-01

    The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we used the refined object identification produced in the first variant to perform the standard MIA with exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure during tissue development to near-infrared light (NIR, 670 nm), and the lack of adverse effects due to exposure to NIR light. PMID:25071966

  12. Differentiation of Low- and High-Grade Pediatric Brain Tumors with High b-Value Diffusion-weighted MR Imaging and a Fractional Order Calculus Model

    PubMed Central

    Sui, Yi; Wang, He; Liu, Guanzhong; Damen, Frederick W.; Wanamaker, Christian; Li, Yuhua

    2015-01-01

    Purpose To demonstrate that a new set of parameters (D, β, and μ) from a fractional order calculus (FROC) diffusion model can be used to improve the accuracy of MR imaging for differentiating among low- and high-grade pediatric brain tumors. Materials and Methods The institutional review board of the performing hospital approved this study, and written informed consent was obtained from the legal guardians of pediatric patients. Multi-b-value diffusion-weighted magnetic resonance (MR) imaging was performed in 67 pediatric patients with brain tumors. Diffusion coefficient D, fractional order parameter β (which correlates with tissue heterogeneity), and a microstructural quantity μ were calculated by fitting the multi-b-value diffusion-weighted images to an FROC model. D, β, and μ values were measured in solid tumor regions, as well as in normal-appearing gray matter as a control. These values were compared between the low- and high-grade tumor groups by using the Mann-Whitney U test. The performance of FROC parameters for differentiating among patient groups was evaluated with receiver operating characteristic (ROC) analysis. Results None of the FROC parameters exhibited significant differences in normal-appearing gray matter (P ≥ .24), but all showed a significant difference (P < .002) between low- (D, 1.53 μm2/msec ± 0.47; β, 0.87 ± 0.06; μ, 8.67 μm ± 0.95) and high-grade (D, 0.86 μm2/msec ± 0.23; β, 0.73 ± 0.06; μ, 7.8 μm ± 0.70) brain tumor groups. The combination of D and β produced the largest area under the ROC curve (0.962) in the ROC analysis compared with individual parameters (β, 0.943; D, 0.910; and μ, 0.763), indicating an improved performance for tumor differentiation. Conclusion The FROC parameters can be used to differentiate between low- and high-grade pediatric brain tumor groups. The combination of FROC parameters or individual parameters may serve as in vivo, noninvasive, and quantitative imaging markers for classifying pediatric brain tumors. © RSNA, 2015 PMID:26035586

  13. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
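
    The basic normalised mutual-information similarity underlying the criterion above can be sketched from a joint intensity histogram; the paper extends this to a multi-dimensional version incorporating ordinal features, which is not reproduced here, and the bin count is an arbitrary choice.

    ```python
    # Minimal sketch: normalised mutual information between two images of equal shape.
    import numpy as np

    def normalized_mutual_information(a, b, bins=64):
        """a, b: intensity images (the floating image already transformed)."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
        return (hx + hy) / hxy   # Studholme's normalised MI
    ```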

  14. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991

  15. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.
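
    As a minimal illustration of the standard Patlak step that the two records above build on, the sketch below estimates Ki and the distribution volume by ordinary linear regression on the late-time Patlak plot. The tracer curves, frame times, and the t* cutoff are toy assumptions; the generalized (gPatlak) variant adds an uptake-reversibility term and requires a nonlinear fit instead.

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=20.0):
    """Post-reconstruction standard Patlak sketch: regress C_T(t)/C_p(t)
    against integral(C_p)/C_p(t) for t >= t_star (minutes); the slope is the
    net influx rate Ki and the intercept the initial distribution volume V."""
    cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    mask = t >= t_star
    x = cp_int[mask] / cp[mask]          # "Patlak time"
    y = ct[mask] / cp[mask]
    ki, v = np.polyfit(x, y, 1)
    return ki, v

# toy FDG-like example with Ki = 0.02 /min and V = 0.5
t = np.linspace(0.5, 60, 30)                       # frame mid-times in minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0                 # simplistic plasma input function
cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = 0.02 * cp_int + 0.5 * cp                      # irreversible tissue curve
print(patlak_ki(t, cp, ct))                        # approx. (0.02, 0.5)
```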

  16. Computer-aided, multi-modal, and compression diffuse optical studies of breast tissue

    NASA Astrophysics Data System (ADS)

    Busch, David Richard, Jr.

    Diffuse Optical Tomography and Spectroscopy permit measurement of important physiological parameters non-invasively through ~10 cm of tissue. I have applied these techniques in measurements of human breast and breast cancer. My thesis integrates three loosely connected themes in this context: multi-modal breast cancer imaging, automated data analysis of breast cancer images, and microvascular hemodynamics of the breast under compression. For the first theme, I describe construction, testing, and the initial clinical usage of two generations of imaging systems for simultaneous diffuse optical and magnetic resonance imaging. The second project develops a statistical analysis of optical breast data from many spatial locations in a population of cancers to derive a novel optical signature of malignancy; I then apply this data-derived signature for localization of cancer in additional subjects. Finally, I construct and deploy diffuse optical instrumentation to measure blood content and blood flow during breast compression; besides optics, this research has implications for any method employing breast compression, e.g., mammography.

  17. Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers.

    PubMed

    Black, Noel F; McJames, Scott; Kadrmas, Dan J

    2009-10-01

    Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer (62)Cu-PTSM (blood flow) + (62)Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging (18)F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute-long dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K(1), K(net)) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K(1) and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques.

  18. Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers

    PubMed Central

    Black, Noel F.; McJames, Scott; Kadrmas, Dan J.

    2009-01-01

    Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer 62Cu-PTSM (blood flow) + 62Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging 18F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute-long dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K1, Knet) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K1 and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques. PMID:20046800

  19. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  20. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
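
    To make the separable parameter space idea in the two records above concrete, the sketch below fits a one-tissue compartment model by exhaustively searching only the nonlinear rate constant k2 while solving the linear scale K1 in closed form at each candidate. The tracer curves, search grid, and uniform time sampling are toy assumptions; the papers' multi-tracer models involve more exponential terms per tracer, but the same linear/nonlinear separation applies.

```python
import numpy as np

def fit_1tc_separable(t, cp, ct, k2_grid=np.linspace(0.01, 1.0, 200)):
    """Separable parameter space sketch for C_T = K1 * [exp(-k2 t) (*) C_p]:
    for each candidate nonlinear parameter k2, the linear parameter K1 has a
    closed-form least-squares solution, so only k2 is searched exhaustively."""
    dt = t[1] - t[0]                       # assumes uniform sampling
    best = (np.inf, None, None)
    for k2 in k2_grid:
        basis = np.convolve(np.exp(-k2 * t), cp)[: len(t)] * dt
        k1 = np.dot(basis, ct) / np.dot(basis, basis)   # linear LS in one unknown
        rss = np.sum((ct - k1 * basis) ** 2)
        if rss < best[0]:
            best = (rss, k1, k2)
    return best[1], best[2]

# toy example: simulate with K1 = 0.1, k2 = 0.2 and recover them
t = np.linspace(0, 60, 240)
cp = 5.0 * t * np.exp(-0.3 * t)
ct = 0.1 * np.convolve(np.exp(-0.2 * t), cp)[: len(t)] * (t[1] - t[0])
print(fit_1tc_separable(t, cp, ct))        # approx. (0.1, 0.2)
```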

  1. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human subject, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  2. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human subject, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  3. Parallel and Efficient Sensitivity Analysis of Microscopy Image Segmentation Workflows in Hybrid Systems

    PubMed Central

    Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel

    2017-01-01

    We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to efficiently speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. The cooperative execution using the CPUs and the Phi available in each node with smart task assignment strategies resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725

  4. Imaging quality analysis of multi-channel scanning radiometer

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Xu, Wujun; Wang, Chengliang

    2008-03-01

    The multi-channel scanning radiometer on board the FY-2 geostationary meteorological satellite plays a key role in remote sensing because of its wide field of view and continuous multi-spectral image acquisition. It is significant to evaluate image quality after the performance parameters of the imaging system are validated. Several methods of evaluating imaging quality are discussed. Of these methods, the most fundamental is the MTF. The MTF of a photoelectric scanning remote sensing instrument, in the scanning direction, is the product of the optics transfer function (OTF), detector transfer function (DTF), and electronics transfer function (ETF). For image motion compensation, the moving speed of the scanning mirror should also be considered. The optical MTF measurement is performed in both the EAST/WEST and NORTH/SOUTH directions, whose values are used for alignment purposes and to determine the general health of the instrument during integration and testing. Imaging systems cannot perfectly reproduce what they see and end up "blurring" the image. Many parts of the imaging system can cause blurring. Among these are the optical elements, the sampling of the detector itself, post-processing, or the earth's atmosphere for systems that image through it. Through theoretical calculation and actual measurement, it is shown that the DTF and ETF are the main factors of the system MTF and that the imaging quality can satisfy the requirements of the instrument design.
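
    As a minimal reminder of the cascaded transfer-function model described above (assuming each stage is linear and shift-invariant), the system MTF in the scan direction factors into optics, detector, and electronics terms. The sinc expression for a detector of aperture width w is the standard textbook form, not a value taken from this instrument:

```latex
\[
  \mathrm{MTF}_{\text{sys}}(f) \;=\;
  \mathrm{MTF}_{\text{opt}}(f)\,\mathrm{MTF}_{\text{det}}(f)\,\mathrm{MTF}_{\text{elec}}(f),
  \qquad
  \mathrm{MTF}_{\text{det}}(f) \;=\; \left|\frac{\sin(\pi w f)}{\pi w f}\right|.
\]
```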

  5. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    PubMed Central

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and using an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  6. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    PubMed

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, and using an optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of the bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  7. A multi-scale convolutional neural network for phenotyping high-content cellular images.

    PubMed

    Godinez, William J; Hossain, Imtiaz; Lazic, Stanley E; Davies, John W; Zhang, Xian

    2017-07-01

    Identifying phenotypes based on high-content cellular images is challenging. Conventional image analysis pipelines for phenotype identification comprise multiple independent steps, with each step requiring method customization and adjustment of multiple parameters. Here, we present an approach based on a multi-scale convolutional neural network (M-CNN) that classifies, in a single cohesive step, cellular images into phenotypes by using directly and solely the images' pixel intensity values. The only parameters in the approach are the weights of the neural network, which are automatically optimized based on training images. The approach requires no a priori knowledge or manual customization, and is applicable to single- or multi-channel images displaying single or multiple cells. We evaluated the classification performance of the approach on eight diverse benchmark datasets. The approach yielded overall a higher classification accuracy compared with state-of-the-art results, including those of other deep CNN architectures. In addition to using the network to simply obtain a yes-or-no prediction for a given phenotype, we use the probability outputs calculated by the network to quantitatively describe the phenotypes. This study shows that these probability values correlate with chemical treatment concentrations. This finding validates further our approach and enables chemical treatment potency estimation via CNNs. The network specifications and solver definitions are provided in Supplementary Software 1.
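
    The sketch below is a generic multi-branch illustration of the multi-scale idea (parallel convolutional paths over down-sampled copies of the same image, concatenated before classification); it is not the paper's M-CNN architecture, and the layer sizes, scales, and class count are illustrative assumptions. PyTorch is assumed to be available.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCNN(nn.Module):
    """Toy multi-scale CNN: each branch sees the image at a different
    resolution; global pooling makes branch outputs size-independent."""
    def __init__(self, in_channels=1, n_classes=4, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),   # global average pooling per branch
            )
            for _ in scales
        ])
        self.classifier = nn.Linear(32 * len(scales), n_classes)

    def forward(self, x):
        feats = []
        for scale, branch in zip(self.scales, self.branches):
            xs = x if scale == 1 else F.avg_pool2d(x, scale)   # down-sample input
            feats.append(branch(xs).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))

# toy usage: a batch of 8 single-channel 128x128 cell images
logits = MultiScaleCNN()(torch.randn(8, 1, 128, 128))
print(logits.shape)   # torch.Size([8, 4])
```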

  8. Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection

    PubMed Central

    Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei

    2012-01-01

    Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647

  9. Efficient geometric rectification techniques for spectral analysis algorithm

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Pang, S. S.; Curlander, J. C.

    1992-01-01

    The spectral analysis algorithm is a viable technique for processing synthetic aperture radar (SAR) data at near-real-time throughput rates by trading off image resolution. One major challenge of the spectral analysis algorithm is that the output image, often referred to as the range-Doppler image, is represented along iso-range and iso-Doppler lines, i.e., on a curved grid. This phenomenon is known as the fan-shape effect. Therefore, resampling is required to convert the range-Doppler image into a rectangular grid format before the individual images can be overlaid together to form seamless multi-look strip imagery. An efficient algorithm for geometric rectification of the range-Doppler image is presented. The proposed algorithm, realized in two one-dimensional resampling steps, takes into consideration the fan-shape phenomenon of the range-Doppler image as well as the high squint angle and updates of the cross-track and along-track Doppler parameters. No ground reference points are required.

  10. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have received comparatively little study. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically based on diffraction theory, and then the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, an approach is presented for solving the resulting equation, in which multiple norms coexist and multiple regularization (prior) parameters must be handled. Subsequently, a space-variant PSF image restoration method for the large-aperture diffractive imaging system is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This provides a scientific basis for future space applications of diffractive membrane imaging technology.
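
    For orientation only, the sketch below shows the simplest single-prior (quadratic, Tikhonov-style) regularized restoration solved in the Fourier domain; the record above describes a richer model with several coexisting priors and a space-variant PSF handled block-wise, which is not reproduced here. The PSF, scene, and regularization weight are toy assumptions.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Minimal regularized restoration: x = argmin ||h*x - y||^2 + lam*||x||^2,
    solved per frequency assuming circular (FFT-based) convolution."""
    H = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# toy usage: blur a random "scene" with a 5x5 box PSF, then restore it
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=scene.shape) * np.fft.fft2(scene)))
restored = tikhonov_deconvolve(blurred, psf, lam=1e-3)
print(np.abs(restored - scene).mean())   # typically much smaller than the blurred-image error
```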

  11. Depth-resolved imaging of colon tumor using optical coherence tomography and fluorescence laminar optical tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tang, Qinggong; Frank, Aaron; Wang, Jianting; Chen, Chao-wei; Jin, Lily; Lin, Jon; Chan, Joanne M.; Chen, Yu

    2016-03-01

    Early detection of neoplastic changes remains a critical challenge in clinical cancer diagnosis and treatment. Many cancers arise from epithelial layers such as those of the gastrointestinal (GI) tract. Current standard endoscopic technology is unable to detect those subsurface lesions. Since cancer development is associated with both morphological and molecular alterations, imaging technologies that can quantitatively image tissue morphological and molecular biomarkers and assess the depth extent of a lesion in real time, without the need for tissue excision, would be a major advance in GI cancer diagnostics and therapy. In this research, we investigated the feasibility of multi-modal optical imaging, including high-resolution optical coherence tomography (OCT) and depth-resolved high-sensitivity fluorescence laminar optical tomography (FLOT), for structural and molecular imaging. APC (adenomatous polyposis coli) model mice were imaged using OCT and FLOT, and the correlated histopathological diagnosis was obtained. Quantitative structural (the scattering coefficient) and molecular imaging parameters (fluorescence intensity) from OCT and FLOT images were developed for multi-parametric analysis. This multi-modal imaging method has demonstrated the feasibility of more accurate diagnosis, with a sensitivity (specificity) of 87.4% (87.3%) at the operating point that gives the optimal diagnosis (the largest area under the receiver operating characteristic (ROC) curve). This project results in a new non-invasive multi-modal imaging platform for improved GI cancer detection, which is expected to have a major impact on detection, diagnosis, and characterization of GI cancers, as well as a wide range of epithelial cancers.

  12. A versatile pipeline for the multi-scale digital reconstruction and quantitative analysis of 3D tissue architecture

    PubMed Central

    Morales-Navarrete, Hernán; Segovia-Miranda, Fabián; Klukowski, Piotr; Meyer, Kirstin; Nonaka, Hidenori; Marsico, Giovanni; Chernykh, Mikhail; Kalaidzidis, Alexander; Zerial, Marino; Kalaidzidis, Yannis

    2015-01-01

    A prerequisite for the systems biology analysis of tissues is an accurate digital three-dimensional reconstruction of tissue structure based on images of markers covering multiple scales. Here, we designed a flexible pipeline for the multi-scale reconstruction and quantitative morphological analysis of tissue architecture from microscopy images. Our pipeline includes newly developed algorithms that address specific challenges of thick dense tissue reconstruction. Our implementation allows for a flexible workflow, scalable to high-throughput analysis and applicable to various mammalian tissues. We applied it to the analysis of liver tissue and extracted quantitative parameters of sinusoids, bile canaliculi and cell shapes, recognizing different liver cell types with high accuracy. Using our platform, we uncovered an unexpected zonation pattern of hepatocytes with different size, nuclei and DNA content, thus revealing new features of liver tissue organization. The pipeline also proved effective to analyse lung and kidney tissue, demonstrating its generality and robustness. DOI: http://dx.doi.org/10.7554/eLife.11214.001 PMID:26673893

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888

  14. Analysis of multi-channel microscopy: Spectral self-interference, multi-detector confocal and 4Pi systems

    NASA Astrophysics Data System (ADS)

    Davis, Brynmor J.

    Fluorescence microscopy is an important and ubiquitous tool in biological imaging due to the high specificity with which fluorescent molecules can be attached to an organism and the subsequent nondestructive in-vivo imaging allowed. Focused-light microscopies allow three-dimensional fluorescence imaging but their resolution is restricted by diffraction. This effect is particularly limiting in the axial dimension as the diffraction-limited focal volume produced by a lens is more extensive along the optical axis than perpendicular to it. Approaches such as confocal microscopy and 4Pi microscopy have been developed to improve the axial resolution. Spectral Self-Interference Fluorescence Microscopy (SSFM) is another high-axial-resolution technique and is the principal subject of this dissertation. Nanometer-precision localization of a single fluorescent layer has been demonstrated using SSFM. This accuracy compares favorably with the axial resolutions given by confocal and 4Pi systems at similar operating parameters (these resolutions are approximately 350nm and 80nm respectively). This theoretical work analyzes the expected performance of the SSFM system when imaging a general object, i.e. an arbitrary fluorophore density function rather than a single layer. An existing model of SSFM is used in simulations to characterize the system's resolution. Several statistically-based reconstruction methods are applied to show that the expected resolution for SSFM is similar to 4Pi microscopy for a general object but does give very high localization accuracy when the object is known to consist of a limited number of layers. SSFM is then analyzed in a linear systems framework and shown to have strong connections, both physically and mathematically, to a multi-channel 4Pi microscope. Fourier-domain analysis confirms that SSFM cannot be expected to outperform this multi-channel 4Pi instrument. Differences between the channels in spatial-scanning, multi-channel microscopies are then exploited to show that such instruments can operate at a sub-Nyquist scanning rate but still produce images largely free of aliasing effects. Multi-channel analysis is also used to show how light typically discarded in confocal and 4Pi systems can be collected and usefully incorporated into the measured image.

  15. In vivo imaging of scattering and absorption properties of exposed brain using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2014-03-01

    We investigate a method to estimate the spectral images of the reduced scattering coefficients and the absorption coefficients of in vivo exposed brain tissues in the visible to near-infrared wavelength range (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, the multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs' = aλ^(-b) are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally, the spectral images of the absorption and reduced scattering coefficients are estimated. The estimated images of absorption coefficients were dominated by the spectral characteristics of hemoglobin. The estimated spectral images of reduced scattering coefficients showed a broad scattering spectrum, exhibiting larger magnitude at shorter wavelengths, corresponding to the typical spectrum of brain tissue published in the literature. In vivo experiments with the exposed brains of rats during cortical spreading depression (CSD) confirmed the ability of the method to evaluate both hemodynamics and changes in tissue morphology due to electrical depolarization.
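
    A minimal, training-based form of the Wiener estimation step is sketched below: a linear operator mapping camera RGB values to multi-spectral reflectance is learned from paired samples as W = E[r v^T] (E[v v^T])^-1. The band count, mixing matrix, and sample sizes are toy assumptions, not values from the record above.

```python
import numpy as np

def wiener_estimation_matrix(rgb_train, spectra_train):
    """Learn W mapping RGB (3 x N, columns are samples) to spectra (L x N)."""
    crv = spectra_train @ rgb_train.T / rgb_train.shape[1]   # cross-correlation E[r v^T]
    cvv = rgb_train @ rgb_train.T / rgb_train.shape[1]       # autocorrelation  E[v v^T]
    return crv @ np.linalg.inv(cvv)

# toy usage: 27-band "spectra" generated from a known 27x3 mixing of RGB plus noise
rng = np.random.default_rng(2)
mix = rng.random((27, 3))
rgb = rng.random((3, 500))
spectra = mix @ rgb + 0.01 * rng.standard_normal((27, 500))
W = wiener_estimation_matrix(rgb, spectra)
print(np.abs(W @ rgb - spectra).mean())    # small reconstruction error
```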

  16. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome, therefore, the majority of imaging methods can only impart uniparametric information. A single nano-particle has the potential to be loaded with multiple payloads. Such multi-modality probes have the ability to be imaged by more than one imaging technique, which could compensate for the weakness or even combine the advantages of each individual modality. Furthermore, optical imaging using different optical probes enables us to achieve multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a close but different near infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employed a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near infrared optical lymphatic imaging using a multiple excitation spectrally-resolved fluorescence imaging technique. PMID:19079788

  17. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement.

    PubMed

    Souza, Roberto; Lucena, Oeslle; Garrafa, Julia; Gobbi, David; Saluzzi, Marina; Appenzeller, Simone; Rittner, Letícia; Frayne, Richard; Lotufo, Roberto

    2018-04-15

    This paper presents an open, multi-vendor, multi-field strength magnetic resonance (MR) T1-weighted volumetric brain imaging dataset, named Calgary-Campinas-359 (CC-359). The dataset is composed of images of older healthy adults (29-80 years) acquired on scanners from three vendors (Siemens, Philips and General Electric) at both 1.5 T and 3 T. CC-359 comprises 359 datasets, approximately 60 subjects per vendor and magnetic field strength. The dataset is approximately age and gender balanced, subject to the constraints of the available images. It provides consensus brain extraction masks for all volumes generated using supervised classification. Manual segmentation results for twelve randomly selected subjects performed by an expert are also provided. The CC-359 dataset allows investigation of 1) the influences of both vendor and magnetic field strength on quantitative analysis of brain MR; 2) parameter optimization for automatic segmentation methods; and potentially 3) machine learning classifiers with big data, specifically those based on deep learning methods, as these approaches require a large amount of data. To illustrate the utility of this dataset, we compared the results of eight publicly available skull stripping methods and one publicly available consensus algorithm against the results of a supervised classifier. A linear mixed effects model analysis indicated that vendor (p-value<0.001) and magnetic field strength (p-value<0.001) have statistically significant impacts on skull stripping results.

  18. Normal values and standardization of parameters in nuclear cardiology: Japanese Society of Nuclear Medicine working group database.

    PubMed

    Nakajima, Kenichi; Matsumoto, Naoya; Kasai, Tokuo; Matsuo, Shinro; Kiso, Keisuke; Okuda, Koichi

    2016-04-01

    As a 2-year project of the Japanese Society of Nuclear Medicine working group activity, normal myocardial imaging databases were accumulated and summarized. Stress-rest gated and non-gated image sets were accumulated for myocardial perfusion imaging and could be used for perfusion defect scoring and normal left ventricular (LV) function analysis. For single-photon emission computed tomography (SPECT) with a multi-focal collimator design, databases for the supine and prone positions and for computed tomography (CT)-based attenuation correction were created. The CT-based correction provided similar perfusion patterns between genders. In phase analysis of gated myocardial perfusion SPECT, a new approach for analyzing dyssynchrony, normal ranges of the phase bandwidth, standard deviation, and entropy parameters were determined for four software programs. Although the results were not interchangeable, dependency on gender, ejection fraction, and volumes was a common characteristic of these parameters. Standardization of (123)I-MIBG sympathetic imaging was performed regarding the heart-to-mediastinum ratio (HMR) using a calibration phantom method. HMRs obtained with any collimator type could be converted to values comparable to those of medium-energy collimators. Appropriate quantification based on common normal databases and standard technology could play a pivotal role in clinical practice and research.

  19. Digital radiography: optimization of image quality and dose using multi-frequency software.

    PubMed

    Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D

    2012-09-01

    New developments in processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality at DR of the femur in 5-year-old-equivalent anthropomorphic and technical phantoms. A total of 110 images of an anthropomorphic phantom were imaged on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images taken of a technical contrast-detail phantom (CDRAD 2.0) provide an objective image-quality assessment. Optimal image-quality was maintained at a dose reduction of 61% with MLT(S) optimized images. Even for images of diagnostic quality, MLT(S) provided a dose reduction of 88% as compared to the reference image. Software impact on image quality was found significant for dose (mAs), dynamic range dark region and frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.

  20. Towards Personalized Cardiology: Multi-Scale Modeling of the Failing Heart

    PubMed Central

    Amr, Ali; Neumann, Dominik; Georgescu, Bogdan; Seegerer, Philipp; Kamen, Ali; Haas, Jan; Frese, Karen S.; Irawati, Maria; Wirsz, Emil; King, Vanessa; Buss, Sebastian; Mereles, Derliz; Zitron, Edgar; Keller, Andreas; Katus, Hugo A.; Comaniciu, Dorin; Meder, Benjamin

    2015-01-01

    Background Despite modern pharmacotherapy and advanced implantable cardiac devices, the overall prognosis and quality of life of heart failure (HF) patients remain poor. This is in part due to insufficient patient stratification and lack of individualized therapy planning, resulting in less effective treatments and a significant number of non-responders. Methods and Results State-of-the-art clinical phenotyping was acquired, including magnetic resonance imaging (MRI) and biomarker assessment. An individualized, multi-scale model of heart function covering cardiac anatomy, electrophysiology, biomechanics and hemodynamics was estimated using a robust framework. The model was computed on n=46 HF patients, showing for the first time that advanced multi-scale models can be fitted consistently on large cohorts. Novel multi-scale parameters derived from the model of all cases were analyzed and compared against clinical parameters, cardiac imaging, lab tests and survival scores to evaluate the explicative power of the model and its potential for better patient stratification. Model validation was pursued by comparing clinical parameters that were not used in the fitting process against model parameters. Conclusion This paper illustrates how advanced multi-scale models can complement cardiovascular imaging and how they could be applied in patient care. Based on obtained results, it becomes conceivable that, after thorough validation, such heart failure models could be applied for patient management and therapy planning in the future, as we illustrate in one patient of our cohort who received CRT-D implantation. PMID:26230546

  1. Collagen morphology and texture analysis: from statistics to classification

    PubMed Central

    Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.

    2013-01-01

    In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage. PMID:23846580
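
    To make the second-order texture step concrete, the sketch below builds a gray-level co-occurrence matrix (GLCM) with plain NumPy and derives a few standard features from it; the quantization depth, pixel offset, and toy image are illustrative assumptions, and the record above combines such GLCM features with first-order statistics before classification.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Quantize the image, accumulate a symmetric GLCM for one non-negative
    pixel offset, and compute contrast, homogeneity, and energy."""
    q = np.floor(img / img.max() * (levels - 1e-9)).astype(int)  # quantize to `levels` bins
    dr, dc = offset                       # this sketch assumes non-negative offsets
    a = q[: q.shape[0] - dr, : q.shape[1] - dc]
    b = q[dr:, dc:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    glcm = glcm + glcm.T                  # symmetrize
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return {
        "contrast":    np.sum(glcm * (i - j) ** 2),
        "homogeneity": np.sum(glcm / (1.0 + np.abs(i - j))),
        "energy":      np.sqrt(np.sum(glcm ** 2)),
    }

# toy usage on a random "SHG image"
rng = np.random.default_rng(3)
print(glcm_features(rng.random((256, 256))))
```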

  2. A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation

    DTIC Science & Technology

    1992-01-30

    A real-time system for multi-sensor image analysis through pyramidal segmentation. L. Rudin, S. Osher, G. Koepfler, J.-M. Morel. Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown great practical potential.

  3. Association between pathology and texture features of multi parametric MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve

    2017-10-01

    The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is the evaluation using histogram-based (first order statistics) parameters and textural features (second order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions to automatically detect malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First and second order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first and second order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures between malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.

  4. Quantification of left ventricular functional parameter values using 3D spiral bSSFP and through-time non-Cartesian GRAPPA.

    PubMed

    Barkauskas, Kestutis J; Rajiah, Prabhakar; Ashwath, Ravi; Hamilton, Jesse I; Chen, Yong; Ma, Dan; Wright, Katherine L; Gulani, Vikas; Griswold, Mark A; Seiberlich, Nicole

    2014-09-11

    The standard clinical acquisition for left ventricular functional parameter analysis with cardiovascular magnetic resonance (CMR) uses a multi-breathhold multi-slice segmented balanced SSFP sequence. Performing multiple long breathholds in quick succession for ventricular coverage in the short-axis orientation can lead to fatigue and is challenging in patients with severe cardiac or respiratory disorders. This study combines the encoding efficiency of a six-fold undersampled 3D stack of spirals balanced SSFP sequence with 3D through-time spiral GRAPPA parallel imaging reconstruction. This 3D spiral method requires only one breathhold to collect the dynamic data. Ten healthy volunteers were recruited for imaging at 3 T. The 3D spiral technique was compared against 2D imaging in terms of systolic left ventricular functional parameter values (Bland-Altman plots), total scan time (Welch's t-test) and qualitative image rating scores (Wilcoxon signed-rank test). Systolic left ventricular functional values were not significantly different (i.e. 3D-2D) between the methods. The 95% confidence interval for ejection fraction was -0.1 ± 1.6% (mean ± 1.96*SD). The total scan time for the 3D spiral technique was 48 s, which included one breathhold with an average duration of 14 s for the dynamic scan, plus 34 s to collect the calibration data under free-breathing conditions. The 2D method required an average of 5 min 40s for the same coverage of the left ventricle. The difference between 3D and 2D image rating scores was significantly different from zero (Wilcoxon signed-rank test, p < 0.05); however, the scores were at least 3 (i.e. average) or higher for 3D spiral imaging. The 3D through-time spiral GRAPPA method demonstrated equivalent systolic left ventricular functional parameter values, required significantly less total scan time and yielded acceptable image quality with respect to the 2D segmented multi-breathhold standard in this study. Moreover, the 3D spiral technique used just one breathhold for dynamic imaging, which is anticipated to reduce patient fatigue as part of the complete cardiac examination in future studies that include patients.

  5. Analytical performance bounds for multi-tensor diffusion-MRI.

    PubMed

    Ahmed Sid, Farid; Abed-Meraim, Karim; Harba, Rachid; Oulebsir-Boumghar, Fatima

    2017-02-01

    To examine the effects of MR acquisition parameters on brain white matter fiber orientation estimation and on parameters of clinical interest in crossing fiber areas based on the Multi-Tensor Model (MTM). We compute the Cramér-Rao Bound (CRB) for the MTM and for parameters of clinical interest such as the Fractional Anisotropy (FA) and the dominant fiber orientations, assuming that the diffusion MRI data are recorded by a multi-coil, multi-shell acquisition system. Considering the sum-of-squares method for the reconstructed magnitude image, we introduce an approximate closed-form formula for the Fisher Information Matrix that has the advantages of simplicity and easy interpretation. In addition, we propose to generalize the FA and the mean diffusivity to the multi-tensor model. We show the application of the CRB to reduce the scan time while preserving good estimation precision. We provide results showing how an increase in the number of acquisition coils compensates for a decrease in the number of diffusion gradient directions. We analyze the impact of the b-value and the Signal-to-Noise Ratio (SNR). The analysis shows that the estimation error variance decreases quadratically with the SNR, and that the optimum b-values are not unique but depend on the target parameter, the context, and possibly the target cost function. In this study we highlight the importance of choosing appropriate acquisition parameters, especially when dealing with crossing fiber areas. We also provide a methodology for the optimal tuning of these parameters using the CRB.
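
    For reference, the Cramér-Rao bound used above states that the covariance of any unbiased estimator is bounded below by the inverse Fisher Information Matrix. The Gaussian-noise form of the FIM shown here is the standard textbook expression; the record above derives an approximate closed form for the sum-of-squares magnitude signal instead.

```latex
\[
  \operatorname{Cov}\!\left(\hat{\boldsymbol{\theta}}\right) \;\succeq\; \mathbf{F}(\boldsymbol{\theta})^{-1},
  \qquad
  \big[\mathbf{F}(\boldsymbol{\theta})\big]_{kl}
  \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{N}
    \frac{\partial S_i(\boldsymbol{\theta})}{\partial \theta_k}\,
    \frac{\partial S_i(\boldsymbol{\theta})}{\partial \theta_l},
\]
where $S_i(\boldsymbol{\theta})$ is the noise-free multi-tensor diffusion signal for the $i$-th gradient direction and b-value, and $\sigma^2$ is the per-measurement noise variance.
```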

  6. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
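
    A bare-bones version of the bagging step is sketched below: each SVM base classifier is trained on a bootstrap sample and scored by ROC AUC on its own out-of-bag samples, and predictions are averaged over the ensemble. The synthetic features, kernel settings, and ensemble size are toy assumptions, and the hyper-parameter optimization described above is omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

def bagged_svm(X, y, n_estimators=25, seed=0):
    """Train SVCs on bootstrap samples, keep their mean out-of-bag ROC AUC,
    and return an ensemble predictor that averages decision values."""
    rng = np.random.default_rng(seed)
    models, oob_aucs = [], []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), len(y))              # bootstrap indices
        oob = np.setdiff1d(np.arange(len(y)), idx)         # out-of-bag (left-out) indices
        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[idx], y[idx])
        models.append(clf)
        if len(np.unique(y[oob])) > 1:
            oob_aucs.append(roc_auc_score(y[oob], clf.decision_function(X[oob])))
    predict = lambda Xnew: np.mean([m.decision_function(Xnew) for m in models], axis=0)
    return predict, float(np.mean(oob_aucs))

# toy usage on a synthetic two-class "multi-parametric feature" problem
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
predict, oob_auc = bagged_svm(X, y)
print(oob_auc, roc_auc_score(y, predict(X)))
```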

  7. Temporary morphological changes in plus disease induced during contact digital imaging

    PubMed Central

    Zepeda-Romero, L C; Martinez-Perez, M E; Ruiz-Velasco, S; Ramirez-Ortiz, M A; Gutierrez-Padilla, J A

    2011-01-01

    Objective To compare and quantify the retinal vascular changes induced by non-intentional pressure contact of a digital handheld camera during retinopathy of prematurity (ROP) imaging, by means of a computer-based image analysis system, Retinal Image multiScale Analysis. Methods A set of 10 wide-angle retinal pairs of photographs per patient, who underwent routine ROP examinations, was measured. Vascular trees were matched between 'compression artifact' (absence of the vascular column at the optic nerve) and 'no compression artifact' conditions. Parameters were analyzed using a two-level linear model for each individual parameter for arterioles and venules separately: integrated curvature (IC), diameter (d), and tortuosity index (TI). Results Images affected by compression artifact showed significant changes in vascular diameter (P<0.01) in both arteries and veins, as well as in arterial IC (P<0.05). Vascular TI remained unchanged in both groups. Conclusions Inadvertent corneal pressure with the RetCam lens can compress and decrease intra-arterial diameter or even collapse retinal vessels. Careful attention to technique is essential to avoid absence of the arterial blood column at the optic nerve head, which is indicative of increased pressure during imaging. PMID:21760627

  8. THE SPITZER SURVEY OF STELLAR STRUCTURE IN GALAXIES (S4G): MULTI-COMPONENT DECOMPOSITION STRATEGIES AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salo, Heikki; Laurikainen, Eija; Laine, Jarkko

    The Spitzer Survey of Stellar Structure in Galaxies (S4G) is a deep 3.6 and 4.5 μm imaging survey of 2352 nearby (<40 Mpc) galaxies. We describe the S4G data analysis pipeline 4, which is dedicated to two-dimensional structural surface brightness decompositions of 3.6 μm images, using GALFIT3.0. Besides automatic 1-component Sérsic fits, and 2-component Sérsic bulge + exponential disk fits, we present human-supervised multi-component decompositions, which include, when judged appropriate, a central point source, bulge, disk, and bar components. Comparison of the fitted parameters indicates that multi-component models are needed to obtain reliable estimates for the bulge Sérsic index and bulge-to-total light ratio (B/T), confirming earlier results. Here, we describe the preparations of input data done for decompositions, give examples of our decomposition strategy, and describe the data products released via IRSA and via our web page (www.oulu.fi/astronomy/S4G-PIPELINE4/MAIN). These products include all the input data and decomposition files in electronic form, making it easy to extend the decompositions to suit specific science purposes. We also provide our IDL-based visualization tools (GALFIDL) developed for displaying/running GALFIT-decompositions, as well as our mask editing procedure (MASK-EDIT) used in data preparation. A detailed analysis of the bulge, disk, and bar parameters derived from multi-component decompositions will be published separately.

  9. Spectral and spatial resolution analysis of multi sensor satellite data for coral reef mapping: Tioman Island, Malaysia

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Kabiri, Keivan

    2012-07-01

    This paper describes an assessment of coral reef mapping using multi-sensor satellite images such as Landsat ETM, SPOT and IKONOS for Tioman Island, Malaysia. The study area is known as one of the best islands in South East Asia for its unique collection of diverse coral reefs and hosts thousands of tourists every year. For coral reef identification, classification and analysis, Landsat ETM, SPOT and IKONOS images were collected, processed and classified using hierarchical classification schemes. First, a decision tree classification was applied to separate three main land cover classes, i.e. water, rural and vegetation, and then maximum likelihood supervised classification was used to further classify these main classes. The accuracy of the classification result was evaluated with a separate test sample set, selected based on the fieldwork survey and visual interpretation of the IKONOS image. The ancillary data used were: (a) DGPS ground control points; (b) water quality parameters measured by a Hydrolab DS4a; (c) sea-bed substrate spectra measured by a Unispec; and (d) land cover observation photos along the Tioman Island coastal area. The overall accuracy of the final classification result was 92.25%, with a kappa coefficient of 0.8940. Key words: Coral reef, Multi-spectral Segmentation, Pixel-Based Classification, Decision Tree, Tioman Island
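    Accuracy figures like the 92.25% overall accuracy and 0.8940 kappa coefficient reported above come from a confusion matrix between the classified map and the reference test samples. A minimal sketch of that assessment step, using scikit-learn and randomly generated labels purely for illustration:

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

    # Hypothetical reference (field / IKONOS-derived) and classified labels for test pixels
    classes = ["water", "coral", "vegetation", "rural"]
    rng = np.random.default_rng(0)
    reference = rng.choice(classes, size=400)
    classified = np.where(rng.random(400) < 0.9, reference, rng.choice(classes, size=400))

    print("confusion matrix:\n", confusion_matrix(reference, classified, labels=classes))
    print("overall accuracy: %.4f" % accuracy_score(reference, classified))
    print("kappa coefficient: %.4f" % cohen_kappa_score(reference, classified))
    ```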

  10. The design and application of a multi-band IR imager

    NASA Astrophysics Data System (ADS)

    Li, Lijuan

    2018-02-01

    Multi-band IR imaging systems have many applications in security, national defense, and the petroleum and gas industry, so the relevant technologies have received increasing attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can exploit spectral features in addition to spatial and temporal features to discriminate targets from background clutter and decoys. One of the key tasks is therefore to select spectral bands in which the feature differences between targets and false targets are evident and can be well exploited. Compared with a commercial imaging spectrometer, a multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds and decoys for spectral band selection studies at low cost and with adjustable parameters and properties. In this paper, a multi-band IR imaging system is developed that collects images in four spectral bands of a given scene in each acquisition and can be extended to other combinations of short-wave and mid-wave IR spectral bands by changing filter groups. The system consists of a broadband optical system, a cryogenic InSb large-array detector, a spinning filter wheel and an electronic processing system. Its performance is verified in real data collection experiments.

  11. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms in specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well, and while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms, for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them in 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of algorithms' similarity metrics, transformation models and optimization strategies. We also discuss future directions for the algorithm development and evaluations. PMID:24951685

  12. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses by employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of available INSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The graph-cut based image registration technique helps us detect the devastation along the Tohoku coastline through changes in pixel intensity, which drives a regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to the remotely sensed images by manually segmenting the images and recovering the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq. km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The MATLAB-based analysis suggests that the proposed graph-cut algorithm is more robust and accurate than other image registration methods. The analysis shows that the method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate assessment strategies for post-disaster needs assessment in coastal belts damaged by strong shaking and tsunamis under disaster risk mitigation programs.
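    The fractal dimension of a spatial pattern (for example, a binarized map of clustered seismic events or a segmented coastline) is commonly estimated by box counting: cover the pattern with grids of decreasing box size and fit the slope of log(count) versus log(1/size). The sketch below is a generic box-counting estimator in NumPy, not code from this study; the test pattern is synthetic.

    ```python
    import numpy as np

    def box_counting_dimension(mask, sizes=(256, 128, 64, 32, 16, 8)):
        """Estimate the fractal (box-counting) dimension of a 2-D binary mask."""
        counts = []
        h, w = mask.shape
        for s in sizes:
            boxes = 0
            for i in range(0, h, s):
                for j in range(0, w, s):
                    # Count s x s boxes containing at least one foreground pixel
                    if mask[i:i + s, j:j + s].any():
                        boxes += 1
            counts.append(boxes)
        # Slope of log(count) vs log(1/size) gives the dimension
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Synthetic test pattern: a filled diagonal band
    y, x = np.mgrid[0:512, 0:512]
    mask = np.abs(x - y) < 40
    print("estimated box-counting dimension: %.2f" % box_counting_dimension(mask))
    ```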

  13. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    NASA Astrophysics Data System (ADS)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven-wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.

  14. [Fractional vegetation cover of invasive Spartina alterniflora in coastal wetland using unmanned aerial vehicle (UAV) remote sensing].

    PubMed

    Zhou, Zai Ming; Yang, Yan Ming; Chen, Ben Qing

    2016-12-01

    Effective management and utilization of the resources and ecological environment of coastal wetlands require high-precision investigation and analysis of the fractional vegetation cover of the invasive species Spartina alterniflora. In this study, Sansha Bay was selected as the experimental region, and visible and multi-spectral images obtained by a low-altitude UAV over the region were used to monitor the fractional vegetation cover of S. alterniflora. Fractional vegetation cover in the multi-spectral images was then estimated with an NDVI-based model, and the accuracy was assessed against the visible images used as references. Results showed that the vegetation cover of S. alterniflora in the imaged area was mainly at the medium-high level (40%-60%) and high level (60%-80%). The root mean square error (RMSE) between the NDVI model estimates and the true values was 0.06, and the coefficient of determination (R²) was 0.92, indicating good consistency between the estimated and true values.
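    A common way to turn NDVI into fractional vegetation cover is the dimidiate pixel model, FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil), where NDVI_soil and NDVI_veg are the index values for bare soil/water and for full vegetation cover. The snippet below is a generic sketch of that estimation; the band arrays and endmember values are illustrative and the paper's exact model parameters are not reproduced here.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
        return (nir - red) / (nir + red + eps)

    def fractional_cover(ndvi_img, ndvi_soil=0.05, ndvi_veg=0.85):
        """Dimidiate pixel model: linear scaling of NDVI between soil and full-cover endmembers."""
        fvc = (ndvi_img - ndvi_soil) / (ndvi_veg - ndvi_soil)
        return np.clip(fvc, 0.0, 1.0)

    # Illustrative multi-spectral reflectance bands (values in [0, 1])
    rng = np.random.default_rng(1)
    red = rng.uniform(0.02, 0.2, size=(100, 100))
    nir = rng.uniform(0.1, 0.6, size=(100, 100))

    fvc = fractional_cover(ndvi(nir, red))
    print("mean fractional vegetation cover: %.2f" % fvc.mean())
    ```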

  15. Evaluation of Non-Invasive Multispectral Imaging as a Tool for Measuring the Effect of Systemic Therapy in Kaposi Sarcoma

    PubMed Central

    Kainerstorfer, Jana M.; Polizzotto, Mark N.; Uldrick, Thomas S.; Rahman, Rafa; Hassan, Moinuddin; Najafizadeh, Laleh; Ardeshirpour, Yasaman; Wyvill, Kathleen M.; Aleman, Karen; Smith, Paul D.; Yarchoan, Robert; Gandjbakhche, Amir H.

    2013-01-01

    Diffuse multi-spectral imaging has been evaluated as a potential non-invasive marker of tumor response. Multi-spectral images of Kaposi sarcoma skin lesions were taken over the course of treatment, and blood volume and oxygenation concentration maps were obtained through principal component analysis (PCA) of the data. These images were compared with clinical and pathological responses determined by conventional means. We demonstrate that cutaneous lesions have increased blood volume concentration and that changes in this parameter are a reliable indicator of treatment efficacy, differentiating responders and non-responders. Blood volume decreased by at least 20% in all lesions that responded by clinical criteria and increased in the two lesions that did not respond clinically. Responses as assessed by multi-spectral imaging also generally correlated with overall patient clinical response assessment, were often detectable earlier in the course of therapy, and are less subject to observer variability than conventional clinical assessment. Tissue oxygenation was more variable, with lesions often showing decreased oxygenation in the center surrounded by a zone of increased oxygenation. This technique could potentially be a clinically useful supplement to existing response assessment in KS, providing an early, quantitative, and non-invasive marker of treatment effect. PMID:24386302
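    The blood volume and oxygenation maps described above were derived via principal component analysis of the multi-spectral data. As a rough illustration of that step (not the authors' calibration or component interpretation), one can reshape a wavelength-stacked image into a pixels-by-bands matrix and project it onto its leading principal components:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Illustrative multi-spectral stack: height x width x number of wavelengths
    h, w, n_bands = 64, 64, 6
    rng = np.random.default_rng(0)
    cube = rng.random((h, w, n_bands))

    # Flatten to (pixels, bands), fit PCA, and map the leading components back to images
    X = cube.reshape(-1, n_bands)
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X)                 # per-pixel component scores
    component_maps = scores.reshape(h, w, 2)      # e.g., candidate blood-volume / oxygenation maps

    print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
    ```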

  16. Visualization of hemodynamics and light scattering in exposed brain of rat using multispectral image reconstruction based on Wiener estimation method

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Ishizuka, Tomohiro; Yoshida, Keiichiro; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu

    2015-07-01

    We investigate a method to estimate spectral images of the reduced scattering coefficients and the absorption coefficients of in vivo exposed brain tissue in the visible to near-infrared range (500-760 nm), based on diffuse reflectance spectroscopy using a digital RGB camera. In the proposed method, multi-spectral reflectance images of the in vivo exposed brain are reconstructed from the digital red, green, and blue images using the Wiener estimation algorithm. A Monte Carlo simulation-based multiple regression analysis of the absorbance spectra is then used to specify the absorption and scattering parameters of the brain tissue. In this analysis, the concentrations of oxygenated and deoxygenated hemoglobin are estimated as the absorption parameters, whereas the scattering amplitude a and the scattering power b in the expression μs' = aλ^(-b) are estimated as the scattering parameters. The spectra of the absorption and reduced scattering coefficients are reconstructed from these parameters, and finally the spectral images of the absorption and reduced scattering coefficients are estimated. We performed simultaneous recordings of spectral diffuse reflectance images and electrophysiological signals for in vivo exposed rat brain during cortical spreading depression (CSD) evoked by topical application of KCl. Changes in the total hemoglobin concentration and the tissue oxygen saturation imply a temporary change in cerebral blood flow during CSD. A change in the reduced scattering coefficient was observed before the profound increase in total hemoglobin concentration, and its occurrence was synchronized with the negative DC shift of the local field potential.
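    Wiener estimation reconstructs a multi-spectral reflectance vector from an RGB measurement with a linear estimator learned from training spectra and the camera's spectral sensitivities. A minimal, noise-free sketch of this idea is given below; the training spectra and sensitivity curves are random placeholders, and a practical implementation would also include a measurement-noise term in the matrix being inverted.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_wavelengths, n_train = 27, 500          # e.g., 500-760 nm sampled in 10 nm steps

    # Placeholder training reflectance spectra (columns) and RGB sensitivity matrix
    R_train = rng.random((n_wavelengths, n_train))
    S = rng.random((3, n_wavelengths))        # camera spectral sensitivities (3 x wavelengths)
    C_train = S @ R_train                     # corresponding RGB responses

    # Wiener (here: noise-free, pseudo-inverse-like) estimation matrix
    W = R_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

    # Reconstruct a spectrum from a new RGB measurement
    r_true = rng.random(n_wavelengths)
    c_meas = S @ r_true
    r_est = W @ c_meas
    print("reconstructed spectrum shape:", r_est.shape)
    ```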

  17. A complete system for 3D reconstruction of roots for phenotypic analysis.

    PubMed

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with the detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points of the root boundary, together with a Bayes classification rule. The detected root tips are tracked through the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm which weights the data points by their eccentricity. The conics projected from the circular trajectories have complex-conjugate intersections, which are the images of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are then used to reconstruct a 3D voxel model of the roots. We show results of real 3D reconstructions of roots that are detailed and realistic enough for phenotypic analysis.
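    Fitting a conic to tracked tip positions amounts to solving, in a least-squares sense, the general conic equation a x² + b xy + c y² + d x + e y + f = 0 for the coefficient vector. The sketch below shows a plain (unweighted) algebraic fit via SVD; the paper's eccentricity-weighted variant would additionally weight the rows of the design matrix.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Least-squares fit of a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.
        Returns the coefficient vector (a, b, c, d, e, f), determined up to scale."""
        D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
        # The right singular vector for the smallest singular value minimizes ||D w|| with ||w|| = 1
        _, _, Vt = np.linalg.svd(D)
        return Vt[-1]

    # Noisy points on an ellipse, standing in for a projected circular trajectory
    t = np.linspace(0, 2*np.pi, 200)
    rng = np.random.default_rng(0)
    x = 3.0*np.cos(t) + 0.5 + 0.02*rng.standard_normal(t.size)
    y = 1.5*np.sin(t) - 0.2 + 0.02*rng.standard_normal(t.size)

    coef = fit_conic(x, y)
    print("conic coefficients (a, b, c, d, e, f):", np.round(coef, 3))
    ```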

  18. Improved quantitation and reproducibility in multi-PET/CT lung studies by combining CT information.

    PubMed

    Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Endozo, Raymond; Maher, Toby M; Groves, Ashley M; Hutton, Brian F; Thielemans, Kris

    2018-06-05

    Matched attenuation maps are vital for obtaining accurate and reproducible kinetic and static parameter estimates from PET data. With increased interest in PET/CT imaging of diffuse lung diseases for assessing disease progression and treatment effectiveness, understanding the extent of the effect of respiratory motion and establishing methods for correction are becoming more important. In a previous study, we have shown that using the wrong attenuation map leads to large errors due to density mismatches in the lung, especially in dynamic PET scans. Here, we extend this work to the case where the study is sub-divided into several scans, e.g. for patient comfort, each with its own CT (cine-CT and 'snap shot' CT). A method to combine multi-CT information into a combined-CT has then been developed, which averages the CT information from each study section to produce composite CT images with the lung density more representative of that in the PET data. This combined-CT was applied to nine patients with idiopathic pulmonary fibrosis, imaged with dynamic 18F-FDG PET/CT to determine the improvement in the precision of the parameter estimates. Using XCAT simulations, errors in the influx rate constant were found to be as high as 60% in multi-PET/CT studies. Analysis of patient data identified displacements between study sections in the time activity curves, which led to an average standard error in the estimates of the influx rate constant of 53% with conventional methods. This reduced to within 5% after use of combined-CTs for attenuation correction of the study sections. Use of combined-CTs to reconstruct the sections of a multi-PET/CT study, as opposed to using the individually acquired CTs at each study stage, produces more precise parameter estimates and may improve discrimination between diseased and normal lung.

  19. Parameterization of Shape and Compactness in Object-based Image Classification Using Quickbird-2 Imagery

    NASA Astrophysics Data System (ADS)

    Tonbul, H.; Kavzoglu, T.

    2016-12-01

    In recent years, object-based image analysis (OBIA) has become a widely accepted technique for the analysis of remotely sensed data. OBIA groups pixels into homogeneous objects based on the spectral, spatial and textural features of contiguous pixels in an image. The first stage of OBIA, image segmentation, is the most prominent part of object recognition. In this study, multi-resolution segmentation, a region-based approach, was employed to construct image objects. In the application of multi-resolution segmentation, three parameters, namely shape, compactness and scale, must be set by the analyst. Segmentation quality markedly influences the fidelity of the thematic maps and, accordingly, the classification accuracy. It is therefore of great importance to search for and set optimal values for the segmentation parameters. In the literature, the main focus has been on the definition of the scale parameter, under the assumption that the effect of the shape and compactness parameters on classification accuracy is limited. The aim of this study is to analyze in depth the influence of the shape/compactness parameters by varying their values while using the optimal scale parameter determined with the Estimation of Scale Parameter (ESP-2) approach. A pansharpened Quickbird-2 image covering Trabzon, Turkey, was employed to investigate the objectives of the study. For this purpose, six different shape/compactness combinations were used to draw conclusions on the behavior of the shape and compactness parameters and on the optimal setting of all parameters as a whole. Objects were assigned to classes using the nearest neighbor classifier in all segmentation trials, and equal numbers of pixels were randomly selected to calculate accuracy metrics. The highest overall accuracy (92.3%) was achieved by setting the shape/compactness criteria to 0.3/0.3. The results of this study indicate that the shape/compactness parameters can have a significant effect on classification accuracy, with a 4% change in overall accuracy. The statistical significance of the differences in accuracy was tested using McNemar's test, which showed that the difference between poor and optimal settings of the shape/compactness parameters was statistically significant, suggesting a search for optimal parameterization instead of the default setting.

  20. An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.

    PubMed

    Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero

    2017-04-01

    The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
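    A typical automated daily check reduces the phantom image to a handful of scalar metrics (for example, mean signal, noise, and SNR) that are appended to a time series for trending. The sketch below is a generic illustration of that idea, not the authors' pipeline; the ROI positions, the synthetic image, and the CSV log path are placeholders.

    ```python
    import csv
    import datetime
    import numpy as np

    def phantom_qa_metrics(image):
        """Very simple daily QA metrics from a 2-D phantom image:
        mean signal in a central ROI, noise from a background corner ROI, and their ratio."""
        h, w = image.shape
        signal_roi = image[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20]
        noise_roi = image[:20, :20]                    # assumed air/background corner
        mean_signal = float(signal_roi.mean())
        noise_sd = float(noise_roi.std())
        return {"mean_signal": mean_signal,
                "noise_sd": noise_sd,
                "snr": mean_signal / noise_sd if noise_sd > 0 else float("nan")}

    # Placeholder acquisition: synthetic phantom image instead of a DICOM read
    rng = np.random.default_rng(0)
    img = rng.normal(loc=10.0, scale=2.0, size=(256, 256))
    img[64:192, 64:192] += 200.0                       # bright phantom region

    row = {"date": datetime.date.today().isoformat(), **phantom_qa_metrics(img)}
    with open("daily_mri_qa.csv", "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(row)
    print(row)
    ```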

  1. Crop Identification Using Time Series of Landsat-8 and Radarsat-2 Images: Application in a Groundwater Irrigated Region, South India

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Hubert-Moy, L.; Betbederet, J.; Ruiz, L.; Sekhar, M.; Corgne, S.

    2016-08-01

    Monitoring land use and land cover, and more particularly irrigated cropland dynamics, is of great importance for water resources management and land use planning. The objective of this study was to evaluate the combined use of multi-temporal optical and radar data with a high spatial resolution in order to improve the precision of irrigated crop identification by taking into account information on crop phenological stages. SAR and optical parameters were derived from time-series of seven quad-pol RADARSAT-2 and four Landsat-8 images acquired over the Berambadi catchment, South India, during the monsoon crop season at the growth stages of the turmeric crop. To select the best parameters for discriminating turmeric crops, an analysis of covariance (ANCOVA) was applied to all the time-series parameters, and the most discriminant ones were classified using the Support Vector Machine (SVM) technique. Results show that, in the absence of optical images, polarimetric parameters derived from SAR time-series can be used for turmeric area estimates, and that the combined use of SAR and optical parameters can improve the classification accuracy for identifying turmeric.

  2. A Study on the Assessment of Multi-Factors Affecting Urban Floods Using Satellite Image: A Case Study in Nakdong Basin, S. Korea

    NASA Astrophysics Data System (ADS)

    Kwak, Youngjoo; Kondoh, Akihiko

    2010-05-01

    Floods are also related to changes in socio-economic conditions and land use. Recently, floods have increased due to rapid urbanization and human activity in lowland areas. Therefore, integrated management of the total basin system is necessary to achieve a secure society. Typhoon 'Rusa' swept through the eastern and southern parts of South Korea in 2002. This bitter experience provided valuable knowledge that could be used to mitigate future flood hazards. The purpose of this study is to construct digital maps of the multiple factors related to urban flooding, concerning geomorphologic characteristics, land cover, and surface wetness. The parameters particularly consider geomorphologic functional units, geomorphologic parameters derived from a DEM (digital elevation model), and land use. The research area is the Nakdong River Basin in South Korea. As a result of a preliminary analysis for the Pusan area, a vulnerability map and the flood-prone areas could be extracted by applying spatial analysis in a GIS (geographic information system).

  3. Multi-slice ultrasound image calibration of an intelligent skin-marker for soft tissue artefact compensation.

    PubMed

    Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N

    2017-09-06

    In this paper, a novel multi-slice ultrasound (US) image calibration of an intelligent skin-marker used for soft tissue artefact compensation is proposed to align and orient image slices in an exact H-shaped pattern. Multi-slice calibration is complex; however, in the proposed method, a phantom-based visual alignment followed by transform parameter estimation greatly reduces the complexity and provides sufficient accuracy. In this approach, the Hough Transform (HT) is used to further enhance the image features which originate from the feature-enhancing elements integrated into the physical phantom model, thus reducing feature detection uncertainty. In this framework, slice-by-slice image alignment and calibration are carried out, which makes manual operation easy and convenient. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Quantitative reconstructions in multi-modal photoacoustic and optical coherence tomography imaging

    NASA Astrophysics Data System (ADS)

    Elbau, P.; Mindrinos, L.; Scherzer, O.

    2018-01-01

    In this paper we perform quantitative reconstruction of the electric susceptibility and the Grüneisen parameter of a non-magnetic linear dielectric medium using measurements from a multi-modal photoacoustic and optical coherence tomography system. We consider the mathematical model presented in Elbau et al (2015 Handbook of Mathematical Methods in Imaging ed O Scherzer (New York: Springer) pp 1169-204), where a Fredholm integral equation of the first kind for the Grüneisen parameter was derived. For the numerical solution of the integral equation we consider a Galerkin-type method.
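    A Galerkin-type discretization of a first-kind Fredholm equation ∫ k(x, y) f(y) dy = g(x) with piecewise-constant basis functions reduces it to a (typically ill-conditioned) linear system, which is then solved with some form of regularization. The sketch below is a generic illustration with a Gaussian smoothing kernel and Tikhonov regularization; it does not use the kernel or discretization of the paper.

    ```python
    import numpy as np

    # Discretize f on n cells of [0, 1]; piecewise-constant Galerkin collapses to a
    # quadrature-like system A f = g with A[i, j] = k(x_i, y_j) * h.
    n = 200
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h

    def kernel(xi, yj, width=0.05):
        """Illustrative Gaussian smoothing kernel (not the model from the paper)."""
        return np.exp(-0.5 * ((xi - yj) / width) ** 2) / (width * np.sqrt(2 * np.pi))

    A = kernel(x[:, None], x[None, :]) * h

    # Synthetic "true" parameter and noisy data
    f_true = np.where((x > 0.3) & (x < 0.6), 1.0, 0.2)
    g = A @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

    # Tikhonov-regularized least squares: minimize ||A f - g||^2 + alpha ||f||^2
    alpha = 1e-3
    f_rec = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)
    print("max reconstruction error: %.3f" % np.max(np.abs(f_rec - f_true)))
    ```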

  5. Photofragment image analysis using the Onion-Peeling Algorithm

    NASA Astrophysics Data System (ADS)

    Manzhos, Sergei; Loock, Hans-Peter

    2003-07-01

    With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters. Program summary: Title of program: Glass Onion Catalogue identifier: ADRY Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computer: IBM PC Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT Programming language used: Delphi 4.0 Memory required to execute with typical data: 18 Mwords No. of bits in a word: 32 No. of bytes in distributed program, including test data, etc.: 9 911 434 Distribution format: zip file Keywords: Photofragment image, onion peeling, anisotropy parameters Nature of physical problem: Information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process resides. Reconstructing the three-dimensional distribution from the photofragment image is the first step, further processing involving angular and radial integration of the inverted image to obtain velocity and angular distributions. Provisions have to be made to correct for slight distortions of the image, and to verify the accuracy of the analysis process. Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre Polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion. Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and the centering algorithm to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable to distortions of such symmetry in experimental images. Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately as R³, where R is the radius of the region of interest: for R=200 pixels it is less than a minute, for R=400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. It ranges between less than a second for simple curves and a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R² and depends on the curve profile. It is on the order of a few minutes for images with R=500 pixels. Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. There is an angular averaging option to stabilize the inversion algorithm without losing resolution.
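    Onion peeling inverts the projection of a cylindrically symmetric distribution by working inward from the outermost radius: each ring's contribution is determined from the outer chords and subtracted before moving to the next radius. The snippet below is a generic single-row onion-peeling sketch (uniform radial bins, path-length weights from chord geometry), not code from the Glass Onion program.

    ```python
    import numpy as np

    def path_lengths(n):
        """L[i, j]: path length of the ring [j, j+1) along the chord at distance i from the axis."""
        L = np.zeros((n, n))
        for i in range(n):
            for j in range(i, n):
                L[i, j] = 2.0 * (np.sqrt((j + 1)**2 - i**2) - np.sqrt(max(j**2 - i**2, 0.0)))
        return L

    def onion_peel_row(projection):
        """Recover the radial distribution f from one row of a cylindrically symmetric projection
        by peeling rings from the outside in (back substitution on an upper-triangular system)."""
        n = len(projection)
        L = path_lengths(n)
        f = np.zeros(n)
        for j in range(n - 1, -1, -1):
            f[j] = (projection[j] - L[j, j + 1:] @ f[j + 1:]) / L[j, j]
        return f

    # Forward-project a known radial profile (a Gaussian shell) and check the recovery
    r = np.arange(100)
    f_true = np.exp(-((r - 60.0) / 8.0)**2)
    projection = path_lengths(100) @ f_true
    print("max |recovered - true|: %.2e" % np.max(np.abs(onion_peel_row(projection) - f_true)))
    ```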

  6. 3D reconstruction from multi-view VHR-satellite images in MicMac

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur

    2018-05-01

    This work addresses the generation of high quality digital surface models by fusing multiple depths maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g. occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g. surface planarity) or auxiliary data in form of ground control points are required for its operation. Prior to the fusion procedures, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution)-satellite image datasets (Pléiades, WorldView-3) revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.

  7. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
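    As a flavor of the kind of Python glue described here, the sketch below runs a per-channel processing step across threads and logs each step. It is a generic illustration, not part of FARSIGHT; the processing function, channel names, and array sizes are placeholders.

    ```python
    import logging
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    def process_channel(name, volume):
        """Placeholder per-channel step (e.g., background subtraction before segmentation)."""
        logging.info("processing channel %s, shape %s", name, volume.shape)
        result = volume - np.median(volume)
        logging.info("finished channel %s", name)
        return name, result

    # Small synthetic stand-ins for the five fluorescent channels
    rng = np.random.default_rng(0)
    channels = {f"ch{i}": rng.random((16, 64, 64)) for i in range(5)}

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(lambda kv: process_channel(*kv), channels.items()))

    logging.info("processed channels: %s", sorted(results))
    ```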

  8. Analysis of signal-dependent sensor noise on JPEG 2000-compressed Sentinel-2 multi-spectral images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2017-10-01

    The processing chain of Sentinel-2 MultiSpectral Instrument (MSI) data involves filtering and compression stages that modify MSI sensor noise. As a result, the noise in the Sentinel-2 Level-1C data distributed to users is processed noise. We demonstrate that the processed noise variance model is bivariate: the noise variance depends on the image intensity (caused by the signal dependency of photon-counting detectors) and on the signal-to-noise ratio (SNR; caused by filtering/compression). To provide information on the processed noise parameters, which is missing in the Sentinel-2 metadata, we propose to use a blind noise parameter estimation approach. Existing methods are restricted to a univariate noise model. Therefore, we propose an extension of the existing vcNI+fBm blind noise parameter estimation method to a multivariate noise model, mvcNI+fBm, and apply it to each band of Sentinel-2A data. The obtained results clearly demonstrate that the noise variance is affected by filtering/compression for SNR less than about 15. The processed noise variance is reduced by a factor of 2-5 in homogeneous areas compared to the noise variance at high SNR values. Estimates of the noise variance model parameters are provided for each Sentinel-2A band. The Sentinel-2A MSI Level-1C noise models obtained in this paper could be useful for end users and researchers working in a variety of remote sensing applications.

  9. Influence of additive laser manufacturing parameters on surface using density of partially melted particles

    NASA Astrophysics Data System (ADS)

    Rosa, Benoit; Brient, Antoine; Samper, Serge; Hascoët, Jean-Yves

    2016-12-01

    Mastering the surface produced by additive laser manufacturing is a real challenge and would allow functional surfaces to be obtained without finishing. Direct Metal Deposition (DMD) surfaces are composed of directional and chaotic textures that are directly linked to the process principles. The aim of this work is to obtain surface topographies by mastering the operating process parameters. Based on an experimental investigation, the influence of the operating parameters on the surface finish has been modeled. Topography parameters and multi-scale analysis have been used to characterize the DMD surfaces. This study also proposes a methodology to characterize the DMD chaotic texture through topography filtering and 3D image processing. In parallel, a new parameter is proposed: the density of partially melted particles (Dp). Finally, the study proposes a regression model linking the process parameters to the density of particles parameter.

  10. Radar Imaging of Non-Uniformly Rotating Targets via a Novel Approach for Multi-Component AM-FM Signal Parameter Estimation

    PubMed Central

    Wang, Yong

    2015-01-01

    A novel radar imaging approach for non-uniformly rotating targets is proposed in this study. It is assumed that the maneuverability of the non-cooperative target is severe, and the received signal in a range cell can be modeled as multi-component amplitude-modulated and frequency-modulated (AM-FM) signals after motion compensation. Then, the modified version of Chirplet decomposition (MCD) based on the integrated high order ambiguity function (IHAF) is presented for the parameter estimation of AM-FM signals, and the corresponding high quality instantaneous ISAR images can be obtained from the estimated parameters. Compared with the MCD algorithm based on the generalized cubic phase function (GCPF) in the authors’ previous paper, the novel algorithm presented in this paper is more accurate and efficient, and the results with simulated and real data demonstrate the superiority of the proposed method. PMID:25806870

  11. Methodology for quantitative rapid multi-tracer PET tumor characterizations.

    PubMed

    Kadrmas, Dan J; Hoffman, John M

    2013-10-04

    Positron emission tomography (PET) can image a wide variety of functional and physiological parameters in vivo using different radiotracers. As more is learned about the molecular basis for disease and treatment, the potential value of molecular imaging for characterizing and monitoring disease status has increased. Characterizing multiple aspects of tumor physiology by imaging multiple PET tracers in a single patient provides additional complementary information, and there is a significant body of literature supporting the potential value of multi-tracer PET imaging in oncology. However, imaging multiple PET tracers in a single patient presents a number of challenges. A number of techniques are under development for rapidly imaging multiple PET tracers in a single scan, where signal-recovery processing algorithms are employed to recover various imaging endpoints for each tracer. Dynamic imaging is generally used with tracer injections staggered in time, and kinetic constraints are utilized to estimate each tracers' contribution to the multi-tracer imaging signal. This article summarizes past and ongoing work in multi-tracer PET tumor imaging, and then organizes and describes the main algorithmic approaches for achieving multi-tracer PET signal-recovery. While significant advances have been made, the complexity of the approach necessitates protocol design, optimization, and testing for each particular tracer combination and application. Rapid multi-tracer PET techniques have great potential for both research and clinical cancer imaging applications, and continued research in this area is warranted.

  12. Methodology for Quantitative Rapid Multi-Tracer PET Tumor Characterizations

    PubMed Central

    Kadrmas, Dan J.; Hoffman, John M.

    2013-01-01

    Positron emission tomography (PET) can image a wide variety of functional and physiological parameters in vivo using different radiotracers. As more is learned about the molecular basis for disease and treatment, the potential value of molecular imaging for characterizing and monitoring disease status has increased. Characterizing multiple aspects of tumor physiology by imaging multiple PET tracers in a single patient provides additional complementary information, and there is a significant body of literature supporting the potential value of multi-tracer PET imaging in oncology. However, imaging multiple PET tracers in a single patient presents a number of challenges. A number of techniques are under development for rapidly imaging multiple PET tracers in a single scan, where signal-recovery processing algorithms are employed to recover various imaging endpoints for each tracer. Dynamic imaging is generally used with tracer injections staggered in time, and kinetic constraints are utilized to estimate each tracers' contribution to the multi-tracer imaging signal. This article summarizes past and ongoing work in multi-tracer PET tumor imaging, and then organizes and describes the main algorithmic approaches for achieving multi-tracer PET signal-recovery. While significant advances have been made, the complexity of the approach necessitates protocol design, optimization, and testing for each particular tracer combination and application. Rapid multi-tracer PET techniques have great potential for both research and clinical cancer imaging applications, and continued research in this area is warranted. PMID:24312149

  13. Fast automated analysis of strong gravitational lenses with convolutional neural networks.

    PubMed

    Hezaveh, Yashar D; Levasseur, Laurence Perreault; Marshall, Philip J

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing-the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures-and estimating the corresponding matter distribution of these structures (the 'gravitational lens') has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the 'singular isothermal ellipsoid' density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  14. Automatic Registration of GF4 Pms: a High Resolution Multi-Spectral Sensor on Board a Satellite on Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS imagery. Geometric correction based on the manual selection of geometric control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters, so for the multi-spectral sensor GF4 PMS it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of the base band for automatic registration, and the configuration of the GF4 PMS spatial resolution.

  15. SU-G-IeP4-13: PET Image Noise Variability and Its Consequences for Quantifying Tumor Hypoxia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kueng, R; Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario; Manser, P

    Purpose: The values in a PET image which represent activity concentrations of a radioactive tracer are influenced by a large number of parameters including patient conditions as well as image acquisition and reconstruction. This work investigates noise characteristics in PET images for various image acquisition and image reconstruction parameters. Methods: Different phantoms with homogeneous activity distributions were scanned using several acquisition parameters and reconstructed with numerous sets of reconstruction parameters. Images from six PET scanners from different vendors were analyzed and compared with respect to quantitative noise characteristics. Local noise metrics, which give rise to a threshold value defining the metric of hypoxic fraction, as well as global noise measures in terms of noise power spectra (NPS) were computed. In addition to variability due to different reconstruction parameters, spatial variability of activity distribution and its noise metrics were investigated. Patient data from clinical trials were mapped onto phantom scans to explore the impact of the scanner’s intrinsic noise variability on quantitative clinical analysis. Results: Local noise metrics showed substantial variability up to an order of magnitude for different reconstruction parameters. Investigations of corresponding NPS revealed reconstruction dependent structural noise characteristics. For the acquisition parameters, noise metrics were guided by Poisson statistics. Large spatial non-uniformity of the noise was observed in both axial and radial direction of a PET image. In addition, activity concentrations in PET images of homogeneous phantom scans showed intriguing spatial fluctuations for most scanners. The clinical metric of the hypoxic fraction was shown to be considerably influenced by the PET scanner’s spatial noise characteristics. Conclusion: We showed that a hypoxic fraction metric based on noise characteristics requires careful consideration of the various dependencies in order to justify its quantitative validity. This work may result in recommendations for harmonizing QA of PET imaging for multi-institutional clinical trials.

  16. Analysing and correcting the differences between multi-source and multi-scale spatial remote sensing observations.

    PubMed

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among the analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was used to correct the multiple surface reflectance datasets on the basis of the physical characteristics, mathematical distribution properties, and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, which were obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. The experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and for the corresponding consistency analysis and evaluation.
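    The correction step described above, taking the finest-scale reflectance as the baseline and using Gaussian distribution properties, can be read as matching the first two moments of each dataset to the baseline. The sketch below shows that reading (a simple mean/standard-deviation matching); it is an interpretation for illustration, not the authors' exact formulation, and the reflectance values are synthetic.

    ```python
    import numpy as np

    def match_to_baseline(reflectance, baseline):
        """Rescale a reflectance dataset so its mean and standard deviation match the
        baseline dataset (moment matching under a Gaussian assumption)."""
        r_mean, r_std = reflectance.mean(), reflectance.std()
        b_mean, b_std = baseline.mean(), baseline.std()
        return (reflectance - r_mean) / r_std * b_std + b_mean

    # Illustrative reflectance images at two spatial scales (synthetic values)
    rng = np.random.default_rng(0)
    fine_scale = rng.normal(0.25, 0.05, size=(500, 500))     # baseline (small spatial scale)
    coarse_scale = rng.normal(0.30, 0.08, size=(50, 50))      # dataset to be corrected

    corrected = match_to_baseline(coarse_scale, fine_scale)
    print("corrected mean %.3f, std %.3f" % (corrected.mean(), corrected.std()))
    ```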

  17. Analysing and Correcting the Differences between Multi-Source and Multi-Scale Spatial Remote Sensing Observations

    PubMed Central

    Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun

    2014-01-01

    Differences exist among the analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. the multiple remote sensing observations, the crop parameter estimation models, and the spatial scale effects of surface parameters. Our research proposes a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide a reference for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of the multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was used to correct the multiple surface reflectance datasets on the basis of the physical characteristics, mathematical distribution properties, and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, which were obtained over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. The experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, which provides a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and for the corresponding consistency analysis and evaluation. PMID:25405760

  18. Segment and Fit Thresholding: A New Method for Image Analysis Applied to Microarray and Immunofluorescence Data

    PubMed Central

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.

    2016-01-01

    Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
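
    As an informal illustration of the segment-wise statistics idea described above (not the published SFT code), one might tile an image, flag low-variation tiles as background, and derive a threshold from their statistics; the tile size and the 3-sigma rule below are arbitrary assumptions:

        import numpy as np

        def segment_and_threshold(img, tile=32, k=3.0):
            # Split the image into tiles and compute per-tile mean and standard deviation.
            h, w = img.shape
            stats = []
            for y in range(0, h - tile + 1, tile):
                for x in range(0, w - tile + 1, tile):
                    patch = img[y:y + tile, x:x + tile]
                    stats.append((patch.mean(), patch.std()))
            stats = np.array(stats)
            # Treat the tiles with the lowest variation as background (assumption).
            bg = stats[stats[:, 1] <= np.percentile(stats[:, 1], 25)]
            # Flag signal pixels at k standard deviations above the background mean.
            return img > (bg[:, 0].mean() + k * bg[:, 1].mean())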

  19. Statistical theory and applications of lock-in carrierographic image pixel brightness dependence on multi-crystalline Si solar cell efficiency and photovoltage

    NASA Astrophysics Data System (ADS)

    Mandelis, Andreas; Zhang, Yu; Melnikov, Alexander

    2012-09-01

    A solar cell lock-in carrierographic image generation theory based on the concept of non-equilibrium radiation chemical potential was developed. An optoelectronic diode expression was derived linking the emitted radiative recombination photon flux (current density), the solar conversion efficiency, and the external load resistance via the closed- and/or open-circuit photovoltage. The expression was shown to be of a structure similar to the conventional electrical photovoltaic I-V equation, thereby allowing the carrierographic image to be used in a quantitative statistical pixel brightness distribution analysis with outcome being the non-contacting measurement of mean values of these important parameters averaged over the entire illuminated solar cell surface. This is the optoelectronic equivalent of the electrical (contacting) measurement method using an external resistor circuit and the outputs of the solar cell electrode grid, the latter acting as an averaging distribution network over the surface. The statistical theory was confirmed using multi-crystalline Si solar cells.

  20. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, and build and execute an array of image analysis routines, and provides a mechanism to include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  1. Development of a generalized algorithm of satellite remote sensing using multi-wavelength and multi-pixel information (MWP method) for aerosol properties by satellite-borne imager

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Nakajima, T.; Morimoto, S.; Takenaka, H.

    2014-12-01

    We have developed a new satellite remote sensing algorithm to retrieve aerosol optical characteristics using multi-wavelength and multi-pixel information from satellite imagers (MWP method). In this algorithm, the inversion method is a combination of the maximum a posteriori (MAP) method (Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint for the state vector. Furthermore, with the progress of computing techniques, this method has been combined with direct radiative transfer calculation, numerically solved at each iteration step of the non-linear inverse problem under several constraints, without using a look-up table (LUT). The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine- and coarse-mode particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all the sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. The results of the experiment showed that the AOTs of fine and coarse modes, the soot fraction, and the ground surface albedo are successfully retrieved within the expected accuracy. We discuss the accuracy of the algorithm for various land surface types. We then applied this algorithm to GOSAT/CAI imager data and compared retrieved and surface-observed AOTs at the CAI pixel closest to an AERONET (Aerosol Robotic Network) or SKYNET site in each region. Comparison at several sites in urban areas indicated that AOTs retrieved by our method agree with surface-observed AOTs to within ±0.066. Our future work is to extend the algorithm to the analysis of ADEOS-II/GLI and GCOM-C/SGLI data.
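
    For readers unfamiliar with the inversion framework named above, the following toy linearized step shows how a maximum a posteriori solution with an added Phillips-Twomey smoothness penalty can be written; the matrices are illustrative placeholders, not the GOSAT/CAI forward model, and the combination shown is only a sketch of the general idea:

        import numpy as np

        def map_twomey_step(K, y, x_a, S_e, S_a, gamma=1.0):
            # One linearized retrieval step combining a MAP (Rodgers-style) prior term
            # with a Phillips-Twomey second-difference smoothness constraint on the state.
            n = x_a.size
            H = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
            Se_inv, Sa_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
            A = K.T @ Se_inv @ K + Sa_inv + gamma * H.T @ H
            b = K.T @ Se_inv @ y + Sa_inv @ x_a
            return np.linalg.solve(A, b)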

  2. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging detection technology provides multi-dimensional polarization information in addition to traditional intensity imaging, thus improving the probability of target detection and recognition. Applying image fusion to polarization images of targets in turbid media helps to obtain high-quality images. Using laser polarization imaging at visible wavelengths, linearly polarized intensity images were acquired by rotating the angle of a polarizer, and the polarization parameters of targets were obtained in turbid media with concentrations ranging from 5% to 10%. Image fusion techniques were then introduced, and different polarization image fusion methods were applied to the acquired polarization images; several fusion methods with superior performance in turbid media are discussed, together with tables of the processing results and their analysis. Pixel-level, feature-level and decision-level fusion algorithms were then used to fuse the degree-of-linear-polarization (DOLP) images at three levels of information fusion. The results show that, as the polarization angle increases, the polarization images become increasingly blurred and their quality degrades, while the contrast of the fused image is clearly improved over a single image; finally, the reasons for the increase in image contrast with polarized light are analysed.
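
    To make the degree-of-linear-polarization (DOLP) images referred to above concrete, a common way to form them from four polarizer orientations (0°, 45°, 90°, 135°) is sketched below; this is the generic Stokes-vector calculation, not necessarily the authors' specific processing chain:

        import numpy as np

        def dolp(i0, i45, i90, i135, eps=1e-12):
            # Linear Stokes components from four polarizer orientations.
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            # Degree of linear polarization in [0, 1].
            return np.sqrt(s1**2 + s2**2) / (s0 + eps)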

  3. FPGA-based multi-channel fluorescence lifetime analysis of Fourier multiplexed frequency-sweeping lifetime imaging

    PubMed Central

    Zhao, Ming; Li, Yu; Peng, Leilei

    2014-01-01

    We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014); PMID: 24921725]. The new method, named R-method, allows fast multi-channel lifetime image analysis in the system’s FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778

  4. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells.

    PubMed

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.

  5. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells

    PubMed Central

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569

  6. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
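
    A schematic version of the parameter-sweep idea attributed to Yitzhaky and Peli above might look as follows; the estimated ground truth is taken here as a simple majority vote over all feature-detected images, and the scoring by distance to the ideal ROC point (0, 1) is our assumption, not necessarily the criterion used in the paper:

        import numpy as np

        def select_parameters(detections, params):
            # detections: list of boolean feature maps, one per parameter combination.
            stack = np.stack(detections)
            # Estimated ground truth: pixels detected by the majority of settings.
            gt = stack.sum(axis=0) > (len(detections) / 2)
            best, best_dist = None, np.inf
            for det, p in zip(stack, params):
                tpr = (det & gt).sum() / max(gt.sum(), 1)
                fpr = (det & ~gt).sum() / max((~gt).sum(), 1)
                dist = np.hypot(fpr, 1.0 - tpr)      # distance to the ideal ROC corner
                if dist < best_dist:
                    best, best_dist = p, dist
            return best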

  7. Wrinkle and roughness measurement by the Antera 3D and its application for evaluation of cosmetic products.

    PubMed

    Messaraa, C; Metois, A; Walsh, M; Hurley, S; Doyle, L; Mansfield, A; O'Connor, C; Mavon, A

    2018-01-24

    Skin topographic measurements are of paramount importance in the field of dermo-cosmetic evaluation. The aim of this study was to investigate how the Antera 3D, a multi-purpose handheld camera, correlates with other topographic techniques and changes in skin topography following the use of a cosmetic product. Skin topographic measurements were collected on 26 female volunteers aged 45-70 years with the Antera 3D, the DermaTOP and image analysis on parallel-polarized pictures. Different analysis filters from the Antera 3D were investigated for repeatability, correlations with other imaging techniques and the ability to detect improvements of skin topography following application of a serum. Most of the Antera 3D parameters were found to be strongly correlated with the DermaTOP parameters. No association was found between the Antera 3D parameters and measurements on parallel-polarized photographs. Measurement repeatability was comparable among the different analysis filters, with the exception of wrinkle max depth and roughness Rt. Following a single application of a tightening serum, both the Antera 3D wrinkle and texture parameters were able to record significant improvements, with the best improvements observed with the large filter. The Antera 3D demonstrated its relevance for cosmetic product evaluation. We also provide recommendations for the analysis based on our findings. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas.

    PubMed

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-11-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red-green-blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%-50% and 70%-40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on the root mean square error (RMSE).

  9. An Analysis of the Influence of Flight Parameters in the Generation of Unmanned Aerial Vehicle (UAV) Orthomosaicks to Survey Archaeological Areas

    PubMed Central

    Mesas-Carrascosa, Francisco-Javier; Notario García, María Dolores; Meroño de Larriva, Jose Emilio; García-Ferrer, Alfonso

    2016-01-01

    This article describes the configuration and technical specifications of a multi-rotor unmanned aerial vehicle (UAV) using a red–green–blue (RGB) sensor for the acquisition of images needed for the production of orthomosaics to be used in archaeological applications. Several flight missions were programmed as follows: flight altitudes at 30, 40, 50, 60, 70 and 80 m above ground level; two forward and side overlap settings (80%–50% and 70%–40%); and the use, or lack thereof, of ground control points. These settings were chosen to analyze their influence on the spatial quality of orthomosaicked images processed by Inpho UASMaster (Trimble, CA, USA). Changes in illumination over the study area, their impact on flight duration, and how they relate to these settings are also considered. The combined effect of these parameters on spatial quality is presented as well, defining a ratio between the ground sample distance of the UAV images and the expected root mean square error of a UAV orthomosaick. The results indicate that a balance between all the proposed parameters is useful for optimizing mission planning and image processing, altitude above ground level (AGL) being the main parameter because of its influence on the root mean square error (RMSE). PMID:27809293
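
    Both records above define a ratio involving the ground sample distance (GSD) of the UAV images; the standard nadir-imaging relation is shown below as a small helper, with purely illustrative numbers rather than values from the study:

        def ground_sample_distance(pixel_size_m, altitude_agl_m, focal_length_m):
            # Nadir imaging: GSD = pixel pitch * flight height / focal length.
            return pixel_size_m * altitude_agl_m / focal_length_m

        # e.g. 4.5 um pixels, 60 m AGL, 16 mm lens -> roughly 1.7 cm per pixel
        print(ground_sample_distance(4.5e-6, 60.0, 16e-3))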

  10. Multi-Temporal Analysis of Landsat Imagery for Bathymetry.

    DTIC Science & Technology

    1983-05-01

    this data set, typical results obtained when these data were used to implement proposed procedures, an interpretation of these analyses, and based ... warping, etc.) have been carried out as described in section 3.4 and the DIPS operator manuals. For each date the best available parameter ... 1982. 5. Digital Image Processing System User's Manual, DBA Systems, Inc., Under Contract DMA800-78-C-0101, 8 November 1979. 6. Naylor, L.D. Status of

  11. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and the identification of vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted for each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and an AUC of 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic early detection and risk analysis of cardiovascular disease.
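
    One possible realization of the final classification stage described above (per-segment feature vectors classified by partial least squares) could be written with scikit-learn as follows; the random feature matrix, labels, and the 0.5 decision cut-off are placeholders, not the study's data or tuning:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # X: per-segment features (color, color variation, morphology); y: 1 = artery, 0 = vein.
        X_train = np.random.rand(200, 12)
        y_train = np.random.randint(0, 2, 200)

        pls = PLSRegression(n_components=5)
        pls.fit(X_train, y_train)

        X_test = np.random.rand(50, 12)
        scores = pls.predict(X_test).ravel()
        labels = (scores > 0.5).astype(int)   # threshold the continuous PLS response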

  12. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image for recognition, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  13. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto-optimal solutions. This work focuses on the optimal multi-response evaluation of process parameters for responses such as surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill operations on an A-axis Computer Numerical Control vertical milling center. Process parameters such as tool speed, feed rate and depth of cut are considered; brass material is machined under dry conditions with high-speed steel end milling cutters using a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the multiple objectives ‘Ra’, ‘H’ and ‘Vib’ and to identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, viz. grey relational analysis (GRA).
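
    Since grey relational analysis (GRA) is used above as the comparison baseline, a compact generic version of it is sketched below; the response directions (which outputs are larger-the-better) and the distinguishing coefficient of 0.5 are assumptions for illustration, not the paper's settings:

        import numpy as np

        def grey_relational_grade(responses, larger_better, zeta=0.5):
            # responses: (n_experiments, n_responses); normalize each response column.
            r = np.asarray(responses, dtype=float)
            norm = np.empty_like(r)
            for j in range(r.shape[1]):
                col = r[:, j]
                if larger_better[j]:
                    norm[:, j] = (col - col.min()) / (col.max() - col.min())
                else:
                    norm[:, j] = (col.max() - col) / (col.max() - col.min())
            # Grey relational coefficients against the ideal (normalized value of 1).
            delta = np.abs(1.0 - norm)
            coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
            # Grade = mean coefficient across responses; the highest grade marks the best setting.
            return coeff.mean(axis=1)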

  14. Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994

  15. Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
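
    The Patlak ordinary least squares step referred to in the two records above reduces, per voxel, to a linear fit of transformed time-activity data; a minimal sketch (ignoring weighting, late-frame selection, and motion) with illustrative array names is:

        import numpy as np

        def patlak_ols(tissue_tac, plasma_tac, times):
            # Patlak transform: x = cumulative integral of the plasma input / plasma,
            # y = tissue / plasma; the slope of y vs. x is Ki, the intercept is V.
            cum_cp = np.concatenate(([0.0], np.cumsum(np.diff(times) *
                                     0.5 * (plasma_tac[1:] + plasma_tac[:-1]))))
            x = cum_cp / plasma_tac
            y = tissue_tac / plasma_tac
            ki, v = np.polyfit(x, y, 1)   # OLS line fit; in practice only late-time frames are used
            return ki, v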

  16. Multispectral Snapshot Imagers Onboard Small Satellite Formations for Multi-Angular Remote Sensing

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; Hewagama, Tilak; Georgiev, Georgi; Pasquale, Bert; Aslam, Shahid; Gatebe, Charles K.

    2017-01-01

    Multispectral snapshot imagers are capable of producing 2D spatial images with a single exposure at numerous selected wavelengths using the same camera, and therefore operate differently from push-broom or whisk-broom imagers. They are payloads of choice in multi-angular, multi-spectral imaging missions that use small satellites flying in controlled formation to retrieve Earth science measurements dependent on the target's Bidirectional Reflectance Distribution Function (BRDF). Narrow fields of view are needed to capture images with moderate spatial resolution. This paper quantifies the dependencies of the imager's optical system, spectral elements and camera on the requirements of the formation mission and their impact on performance metrics such as spectral range, swath and signal-to-noise ratio (SNR). All variables and metrics have been generated from a comprehensive payload design tool. The baseline optical parameters were selected (diameter 7 cm, focal length 10.5 cm, pixel size 20 micron, field of view 1.15 deg), and suitable snapshot imaging technologies are available. The spectral components shortlisted were waveguide spectrometers, acousto-optic tunable filters (AOTF), electronically actuated Fabry-Perot interferometers, and integral field spectrographs. Qualitative evaluation favored AOTFs because of their low weight, small size, and flight heritage. Quantitative analysis showed that waveguide spectrometers perform better in terms of achievable swath (10-90 km) and SNR (greater than 20) for 86 wavebands, but the data volume generated will need very high bandwidth communication to downlink. AOTFs meet the external data volume caps as well as the minimum spectral (wavebands) and radiometric (SNR) requirements, and are therefore found to be currently feasible in spite of lower swath and SNR.

  17. Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine

    2014-10-01

    The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on some typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on automating the identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick's texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process based on statistical modeling is proposed, exploring a wide range of parameter values. It provides texture measures for diverse spatial parameters, hence implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed. It relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on Quickbird and Pléiades panchromatic and multispectral images, acquired at different periods on the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques. It has been successfully applied to identify the best texture features in modeling the five considered forest structure variables. The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ˜ 1.1 m is expected on crown diameter, ˜ 0.9 m on tree spacing, ˜ 3 m on height and ˜ 0.06 m on diameter at breast height.
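
    To give a flavour of the Haralick texture features referred to above, a small hand-rolled grey-level co-occurrence example is included below; the quantization to 16 levels and the single (1, 0) offset are arbitrary choices, whereas the actual study sweeps many more parameterizations:

        import numpy as np

        def glcm_features(img, levels=16, dx=1, dy=0):
            # Quantize the image and accumulate a grey-level co-occurrence matrix
            # for the single pixel offset (dx, dy).
            q = np.floor(img.astype(float) / (img.max() + 1e-9) * levels).astype(int)
            q = np.clip(q, 0, levels - 1)
            glcm = np.zeros((levels, levels))
            a = q[:q.shape[0] - dy, :q.shape[1] - dx]
            b = q[dy:, dx:]
            np.add.at(glcm, (a.ravel(), b.ravel()), 1)
            p = glcm / glcm.sum()
            i, j = np.indices(p.shape)
            contrast = ((i - j) ** 2 * p).sum()                 # Haralick contrast
            homogeneity = (p / (1.0 + np.abs(i - j))).sum()     # Haralick homogeneity
            return contrast, homogeneity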

  18. Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia

    DTIC Science & Technology

    2015-10-01

    eyes and image choroidal vessels/capillaries using CARS intravital microscopy. Subtask 3: Measure oxy-hemoglobin levels in PBI test and control eyes ... AWARD NUMBER: W81XWH-14-1-0537

  19. Simultaneous measurement and modulation of multiple physiological parameters in the isolated heart using optical techniques

    PubMed Central

    Lee, Peter; Yan, Ping; Ewart, Paul; Kohl, Peter

    2012-01-01

    Whole-heart multi-parametric optical mapping has provided valuable insight into the interplay of electro-physiological parameters, and this technology will continue to thrive as dyes are improved and technical solutions for imaging become simpler and cheaper. Here, we show the advantage of using improved 2nd-generation voltage dyes, provide a simple solution to panoramic multi-parametric mapping, and illustrate the application of flash photolysis of caged compounds for studies in the whole heart. For proof of principle, we used the isolated rat whole-heart model. After characterising the blue and green isosbestic points of di-4-ANBDQBS and di-4-ANBDQPQ, respectively, two voltage and calcium mapping systems are described. With two newly custom-made multi-band optical filters, (1) di-4-ANBDQBS and fluo-4 and (2) di-4-ANBDQPQ and rhod-2 mapping are demonstrated. Furthermore, we demonstrate three-parameter mapping using di-4-ANBDQPQ, rhod-2 and NADH. Using off-the-shelf optics and the di-4-ANBDQPQ and rhod-2 combination, we demonstrate panoramic multi-parametric mapping, affording a 360° spatiotemporal record of activity. Finally, local optical perturbation of calcium dynamics in the whole heart is demonstrated using the caged compound, o-nitrophenyl ethylene glycol tetraacetic acid (NP-EGTA), with an ultraviolet light-emitting diode (LED). Calcium maps (heart loaded with di-4-ANBDQPQ and rhod-2) demonstrate successful NP-EGTA loading and local flash photolysis. All imaging systems were built using only a single camera. In conclusion, using novel 2nd-generation voltage dyes, we developed scalable techniques for multi-parametric optical mapping of the whole heart from one point of view and panoramically. In addition to these parameter imaging approaches, we show that it is possible to use caged compounds and ultraviolet LEDs to locally perturb electrophysiological parameters in the whole heart. PMID:22886365

  20. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures, by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known; dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6mm vs. 8mm after the VC camera traveled 110mm. Our approach was computationally more efficient, averaging 7.2 sec. vs. 38 sec. per frame. SIFT and Harris affine features resulted in tracking errors of up to 70mm, while our sparse optical flow error was 6mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.
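
    As a rough illustration of the FOE-constrained idea discussed above, the focus of expansion of a purely translating camera can be estimated from sparse flow vectors by least squares, since each flow vector should point radially away from the FOE; this sketch ignores camera rotation, which the actual tracker must also estimate:

        import numpy as np

        def estimate_foe(points, flows):
            # For pure translation, each flow vector (u, v) at (x, y) is radial from the
            # focus of expansion (xf, yf):  u*(y - yf) - v*(x - xf) = 0.
            x, y = points[:, 0], points[:, 1]
            u, v = flows[:, 0], flows[:, 1]
            A = np.column_stack([v, -u])
            b = v * x - u * y
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe   # (xf, yf) in image coordinates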

  1. Semantic focusing allows fully automated single-layer slide scanning of cervical cytology slides.

    PubMed

    Lahrmann, Bernd; Valous, Nektarios A; Eisenmann, Urs; Wentzensen, Nicolas; Grabe, Niels

    2013-01-01

    Liquid-based cytology (LBC) in conjunction with Whole-Slide Imaging (WSI) enables the objective, sensitive, and quantitative evaluation of biomarkers in cytology. However, the complex three-dimensional distribution of cells on LBC slides requires manual focusing, long scanning-times, and multi-layer scanning. Here, we present a solution that overcomes these limitations in two steps: first, we make sure that focus points are only set on cells; second, we check the total slide focus quality. From a first analysis we found that superficial dust can be separated from the cell layer (the thin layer of cells on the glass slide) itself. Then we analyzed 2,295 individual focus points from 51 LBC slides stained for p16 and Ki67. Using the number of edges in a focus point image, specific color values and size-inclusion filters, focus points detecting cells could be distinguished from focus points on artifacts (accuracy 98.6%). Sharpness, as the total focus quality of a virtual LBC slide, is computed from five sharpness features. We trained a multi-parameter SVM classifier on 1,600 images. On an independent validation set of 3,232 cell images we achieved an accuracy of 94.8% for classifying images as focused. Our results show that single-layer scanning of LBC slides is possible and how it can be achieved. We assembled focus point analysis and sharpness classification into a fully automatic, iterative workflow, free of user intervention, which performs repetitive slide scanning as necessary. On 400 LBC slides we achieved a scanning-time of 13.9±10.1 min with 29.1±15.5 focus points. In summary, the integration of semantic focus information into whole-slide imaging allows automatic high-quality imaging of LBC slides and subsequent biomarker analysis.

  2. Multi-parameter phenotypic profiling: using cellular effects to characterize small-molecule compounds.

    PubMed

    Feng, Yan; Mitchison, Timothy J; Bender, Andreas; Young, Daniel W; Tallarico, John A

    2009-07-01

    Multi-parameter phenotypic profiling of small molecules provides important insights into their mechanisms of action, as well as a systems level understanding of biological pathways and their responses to small molecule treatments. It therefore deserves more attention at an early step in the drug discovery pipeline. Here, we summarize the technologies that are currently in use for phenotypic profiling--including mRNA-, protein- and imaging-based multi-parameter profiling--in the drug discovery context. We think that an earlier integration of phenotypic profiling technologies, combined with effective experimental and in silico target identification approaches, can improve success rates of lead selection and optimization in the drug discovery process.

  3. Design and analysis of optical systems for the Stanford/MSFC Multi-Spectral Solar Telescope Array

    NASA Astrophysics Data System (ADS)

    Hadaway, James B.; Johnson, R. Barry; Hoover, Richard B.; Lindblom, Joakim F.; Walker, Arthur B. C., Jr.

    1989-07-01

    This paper reports on the design and the theoretical ray trace analysis of the optical systems which will comprise the primary imaging components for the Stanford/MSFC Multi-Spectral Solar Telescope Array (MSSTA). This instrument is being developed for ultra-high resolution investigations of the sun from a sounding rocket. Doubly reflecting systems of sphere-sphere, ellipsoid-sphere (Dall-Kirkham), paraboloid-hyperboloid (Cassegrain), and hyperboloid-hyperboloid (Ritchey-Chretien) configurations were analyzed. For these mirror systems, ray trace analysis was performed and through-focus spot diagrams, point spread function plots, and geometrical and diffraction MTFs were generated. The results of these studies are presented along with the parameters of the Ritchey-Chretien optical system selected for the MSSTA flight. The payload, which incorporates seven of these Ritchey-Chretien systems, is now being prepared for launch in late September 1989.

  4. Design and analysis of optical systems for the Stanford/MSFC Multi-Spectral Solar Telescope Array

    NASA Technical Reports Server (NTRS)

    Hadaway, James B.; Johnson, R. Barry; Hoover, Richard B.; Lindblom, Joakim F.; Walker, Arthur B. C., Jr.

    1989-01-01

    This paper reports on the design and the theoretical ray trace analysis of the optical systems which will comprise the primary imaging components for the Stanford/MSFC Multi-Spectral Solar Telescope Array (MSSTA). This instrument is being developed for ultra-high resolution investigations of the sun from a sounding rocket. Doubly reflecting systems of sphere-sphere, ellipsoid-sphere (Dall-Kirkham), paraboloid-hyperboloid (Cassegrain), and hyperboloid-hyperboloid (Ritchey-Chretien) configurations were analyzed. For these mirror systems, ray trace analysis was performed and through-focus spot diagrams, point spread function plots, and geometrical and diffraction MTFs were generated. The results of these studies are presented along with the parameters of the Ritchey-Chretien optical system selected for the MSSTA flight. The payload, which incorporates seven of these Ritchey-Chretien systems, is now being prepared for launch in late September 1989.

  5. Multi-temporal MRI carpal bone volumes analysis by principal axes registration

    NASA Astrophysics Data System (ADS)

    Ferretti, Roberta; Dellepiane, Silvana

    2016-03-01

    In this paper, a principal axes registration technique is presented, with application to segmented volumes. The purpose of the proposed registration is to compare multi-temporal volumes of carpal bones from Magnetic Resonance Imaging (MRI) acquisitions. Starting from the second-order moment matrix, the eigenvectors are calculated to allow the rotation of the volumes with respect to reference axes. The volumes are then spatially translated so that they overlap. A quantitative evaluation of the results is carried out by computing classical indices from the confusion matrix, which express similarity measures between volumes of the same organ extracted from MRI acquisitions performed at different times. Within the medical field, the way registration can be used to compare multi-temporal images is of great interest, since it provides the physician with a tool that allows visual monitoring of disease evolution. The segmentation method used herein is based on graph theory and is a robust, unsupervised, parameter-independent method. Patients affected by rheumatic diseases have been considered.
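
    A bare-bones version of the principal axes step described above (centroid alignment plus a rotation built from the eigenvectors of the second-order moment matrix) is given below for 3D point sets extracted from the segmented volumes; handling of eigenvector sign and ordering ambiguity is deliberately omitted, so this is only a sketch:

        import numpy as np

        def principal_axes_align(moving_pts, reference_pts):
            # Center both point clouds and compute their second-order moment (covariance) matrices.
            mov_c = moving_pts - moving_pts.mean(axis=0)
            ref_c = reference_pts - reference_pts.mean(axis=0)
            _, v_mov = np.linalg.eigh(np.cov(mov_c.T))
            _, v_ref = np.linalg.eigh(np.cov(ref_c.T))
            # Rotate the moving volume's principal axes onto the reference axes, then translate.
            R = v_ref @ v_mov.T
            return mov_c @ R.T + reference_pts.mean(axis=0)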

  6. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  7. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    DOE PAGES

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-30

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. We report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.

  8. Fast automated analysis of strong gravitational lenses with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Hezaveh, Yashar D.; Levasseur, Laurence Perreault; Marshall, Philip J.

    2017-08-01

    Quantifying image distortions caused by strong gravitational lensing—the formation of multiple images of distant sources due to the deflection of their light by the gravity of intervening structures—and estimating the corresponding matter distribution of these structures (the ‘gravitational lens’) has primarily been performed using maximum likelihood modelling of observations. This procedure is typically time- and resource-consuming, requiring sophisticated lensing codes, several data preparation steps, and finding the maximum likelihood model parameters in a computationally expensive process with downhill optimizers. Accurate analysis of a single gravitational lens can take up to a few weeks and requires expert knowledge of the physical processes and methods involved. Tens of thousands of new lenses are expected to be discovered with the upcoming generation of ground and space surveys. Here we report the use of deep convolutional neural networks to estimate lensing parameters in an extremely fast and automated way, circumventing the difficulties that are faced by maximum likelihood methods. We also show that the removal of lens light can be made fast and automated using independent component analysis of multi-filter imaging data. Our networks can recover the parameters of the ‘singular isothermal ellipsoid’ density profile, which is commonly used to model strong lensing systems, with an accuracy comparable to the uncertainties of sophisticated models but about ten million times faster: 100 systems in approximately one second on a single graphics processing unit. These networks can provide a way for non-experts to obtain estimates of lensing parameters for large samples of data.
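
    The lens-light removal via independent component analysis mentioned in these records can be prototyped generically with scikit-learn's FastICA, treating each filter band as an observed mixture; which returned component corresponds to the lens versus the background source must still be identified separately, and the sketch below is not the authors' pipeline:

        import numpy as np
        from sklearn.decomposition import FastICA

        def separate_components(multifilter_cube, n_components=2):
            # multifilter_cube: (n_bands, height, width) image stack of the lens system.
            n_bands, h, w = multifilter_cube.shape
            X = multifilter_cube.reshape(n_bands, -1).T      # pixels x bands
            ica = FastICA(n_components=n_components, random_state=0)
            S = ica.fit_transform(X)                         # pixels x components
            return S.T.reshape(n_components, h, w)           # e.g. lens light vs. source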

  9. Peripheral Quantitative CT (pQCT) Using a Dedicated Extremity Cone-Beam CT Scanner

    PubMed Central

    Muhit, A. A.; Arora, S.; Ogawa, M.; Ding, Y.; Zbijewski, W.; Stayman, J. W.; Thawait, G.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Bingham, C.O.; Means, K.; Carrino, J. A.; Siewerdsen, J. H.

    2014-01-01

    Purpose: We describe the initial assessment of the peripheral quantitative CT (pQCT) imaging capabilities of a cone-beam CT (CBCT) scanner dedicated to musculoskeletal extremity imaging. The aim is to accurately measure and quantify bone and joint morphology using information automatically acquired with each CBCT scan, thereby reducing the need for a separate pQCT exam. Methods: A prototype CBCT scanner providing isotropic, sub-millimeter spatial resolution and soft-tissue contrast resolution comparable or superior to standard multi-detector CT (MDCT) has been developed for extremity imaging, including the capability for weight-bearing exams and multi-mode (radiography, fluoroscopy, and volumetric) imaging. Assessment of pQCT performance included measurement of bone mineral density (BMD), morphometric parameters of subchondral bone architecture, and joint space analysis. Measurements employed phantoms, cadavers, and patients from an ongoing pilot study imaged with the CBCT prototype (at various acquisition, calibration, and reconstruction techniques) in comparison to MDCT (using pQCT protocols for analysis of BMD) and micro-CT (for analysis of subchondral morphometry). Results: The CBCT extremity scanner yielded BMD measurement within ±2–3% error in both phantom studies and cadaver extremity specimens. Subchondral bone architecture (bone volume fraction, trabecular thickness, degree of anisotropy, and structure model index) exhibited good correlation with gold standard micro-CT (error ~5%), surpassing the conventional limitations of spatial resolution in clinical MDCT scanners. Joint space analysis demonstrated the potential for sensitive 3D joint space mapping beyond that of qualitative radiographic scores in application to non-weight-bearing versus weight-bearing lower extremities and assessment of phalangeal joint space integrity in the upper extremities. Conclusion: The CBCT extremity scanner demonstrated promising initial results in accurate pQCT analysis from images acquired with each CBCT scan. Future studies will include improved x-ray scatter correction and image reconstruction techniques to further improve accuracy and to correlate pQCT metrics with known pathology. PMID:25076823

  10. Energy Efficient Image/Video Data Transmission on Commercial Multi-Core Processors

    PubMed Central

    Lee, Sungju; Kim, Heegon; Chung, Yongwha; Park, Daihee

    2012-01-01

    In transmitting image/video data over Video Sensor Networks (VSNs), energy consumption must be minimized while maintaining high image/video quality. Although image/video compression is well known for its efficiency and usefulness in VSNs, the excessive costs associated with encoding computation and complexity still hinder its adoption for practical use. However, it is anticipated that high-performance handheld multi-core devices will be used as VSN processing nodes in the near future. In this paper, we propose a way to improve the energy efficiency of image and video compression with multi-core processors while maintaining the image/video quality. We improve the compression efficiency at the algorithmic level or derive the optimal parameters for the combination of a machine and compression based on the tradeoff between the energy consumption and the image/video quality. Based on experimental results, we confirm that the proposed approach can improve the energy efficiency of the straightforward approach by a factor of 2∼5 without compromising image/video quality. PMID:23202181

  11. Automated daily quality control analysis for mammography in a multi-unit imaging center.

    PubMed

    Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli

    2018-01-01

    Background The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
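
    A minimal sketch of the discrete-wavelet/multiresolution idea behind such automated analysis, assuming the PyWavelets package is available; the image here is a synthetic stand-in, not an ACR accreditation phantom:

    ```python
    import numpy as np
    import pywt

    # Synthetic stand-in for a phantom image: smooth background plus a few
    # small bright "specks" that should show up in the fine-scale subbands.
    rng = np.random.default_rng(0)
    image = rng.normal(100.0, 2.0, size=(256, 256))
    image[120:122, 120:122] += 40.0
    image[60:62, 200:202] += 40.0

    # 3-level 2D discrete wavelet transform (multiresolution analysis).
    coeffs = pywt.wavedec2(image, wavelet="db2", level=3)

    # Energy of the detail subbands at each decomposition level is a simple
    # feature that responds to small high-contrast objects such as specks.
    for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        energy = float(np.sum(cH**2) + np.sum(cV**2) + np.sum(cD**2))
        print(f"decomposition level {level}: detail energy = {energy:.1f}")
    ```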

  12. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    PubMed

    Loeffler, Ralf B; McCarville, M Beth; Wagstaff, Anne W; Smeltzer, Matthew P; Krafft, Axel J; Song, Ruitian; Hankins, Jane S; Hillenbrand, Claudia M

    2017-01-01

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC), as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. The aim of this study was to evaluate whether the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing whether there is a bias-free correlation between single-slice R2* and multi-slice or navigator-controlled multi-slice R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analysis were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other, with high correlation coefficients (0.98 ≤ r ≤ 1.00) and P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. We conclude that these three R2* methods can be used interchangeably with existing R2*-HIC calibrations.
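
    A small sketch of the comparison statistics used here (Bland-Altman bias and limits of agreement plus linear regression), applied to illustrative paired R2* values rather than the study data:

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative paired R2* measurements (s^-1) from two acquisition modes.
    rng = np.random.default_rng(1)
    r2star_single = rng.uniform(50, 900, size=40)
    r2star_multi = r2star_single * 1.01 + rng.normal(0, 10, size=40)

    # Bland-Altman: bias and 95% limits of agreement of the differences.
    diff = r2star_multi - r2star_single
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.1f} s^-1, limits of agreement = ±{loa:.1f} s^-1")

    # Linear regression between the two methods (slope near 1 => agreement).
    res = stats.linregress(r2star_single, r2star_multi)
    print(f"slope = {res.slope:.3f}, r = {res.rvalue:.3f}, p = {res.pvalue:.2g}")
    ```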

  13. Radiogenomic analysis of lower grade glioma: a pilot multi-institutional study shows an association between quantitative image features and tumor genomics

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Clark, Kal; Czarnek, Nicholas M.; Shamsesfandabadi, Parisa; Peters, Katherine B.; Saha, Ashirbani

    2017-03-01

    Recent studies showed that genomic analysis of lower grade gliomas can be very effective for stratification of patients into groups with different prognosis, and proposed specific genomic classifications. In this study, we explore the association of one of those genomic classifications with imaging parameters to determine whether imaging could serve a similar role to genomics in cancer patient treatment. Specifically, we analyzed imaging and genomics data for 110 patients from 5 institutions from The Cancer Genome Atlas and The Cancer Imaging Archive datasets. The analyzed imaging data contained a preoperative FLAIR sequence for each patient. The images were analyzed using in-house algorithms that quantify 2D and 3D aspects of the tumor shape. Genomic data consisted of a cluster-of-clusters classification proposed in a recent leading publication in the field of lower grade glioma genomics. Our statistical analysis showed that there is a strong association between the tumor cluster-of-clusters subtype and two imaging features: bounding ellipsoid volume ratio and angular standard deviation. This result shows high promise for the potential use of imaging as a surrogate measure for genomics in the decision process regarding treatment of lower grade glioma patients.

  14. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

    Light collected by spaceborne remote sensors must pass through the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. The detailed physics-based radiative transfer model 6SV requires key ancillary information about atmospheric conditions at the acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve surface reflectance estimates through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used for the 6SV model. The experiments were carried out on Sentinel-2 images; the satellite carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering wavelengths from 440 to 2200 nm. The results suggest that per-pixel atmospheric correction with the 6SV model, integrating AOD and TWV derived from the multi-spectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
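
    A hedged sketch of how per-pixel correction with 6S/6SV-style coefficients is typically applied once the model has been run for the estimated AOD and water vapour; the coefficient values and radiance image below are placeholders, not outputs of this study:

    ```python
    import numpy as np

    def correct_band(radiance, xa, xb, xc):
        """Apply 6S/6SV-style atmospheric correction coefficients to a band.

        The standard 6S output form is y = xa * L - xb and
        surface_reflectance = y / (1 + xc * y), with L the TOA radiance.
        """
        y = xa * radiance - xb
        return y / (1.0 + xc * y)

    # Placeholder TOA radiance image and coefficients for one band; in practice
    # xa, xb, xc would come from a 6SV run driven by the per-pixel AOD and
    # total water vapour estimated from the multi-spectral data.
    toa_radiance = np.full((4, 4), 80.0)          # dummy radiance values
    xa, xb, xc = 0.0025, 0.10, 0.15               # illustrative values only

    surface_reflectance = correct_band(toa_radiance, xa, xb, xc)
    print(surface_reflectance[0, 0])
    ```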

  15. [Research Progress of Multi-Modal Medical Image Fusion at Feature Level].

    PubMed

    Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun

    2016-04-01

    Medical image fusion realizes the combined advantages of functional and anatomical images. This article discusses the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Finally, we indicate the present problems and the future research directions of multi-modal medical image fusion.
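
    As one concrete instance of feature-level fusion of the kind reviewed above, the sketch below concatenates per-pixel feature vectors from two co-registered modalities and reduces them with PCA; it is a generic illustration, not one of the reviewed methods:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Per-pixel feature vectors from two co-registered modalities, e.g. a
    # functional image (1 feature) and an anatomical image (3 texture features).
    n_pixels = 1000
    functional_feats = rng.normal(size=(n_pixels, 1))
    anatomical_feats = rng.normal(size=(n_pixels, 3))

    # Feature-level fusion: concatenate, standardize, then project onto the
    # leading principal components of the joint feature space.
    features = np.hstack([functional_feats, anatomical_feats])
    features = (features - features.mean(axis=0)) / features.std(axis=0)

    cov = np.cov(features, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending order
    order = np.argsort(eigvals)[::-1]
    fused = features @ eigvecs[:, order[:2]]        # keep 2 fused components

    print("fused feature shape:", fused.shape)
    print("explained variance ratio:", eigvals[order[:2]] / eigvals.sum())
    ```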

  16. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    PubMed

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor quality of acquired images, which usually contain missing anatomical information, speckle noise, and a limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit the image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Sentinel-2: State of the Image Quality Calibration at the End of the Commissioning

    NASA Astrophysics Data System (ADS)

    Tremas, Thierry; Lonjou, Vincent; Lacherade, Sophie; Gaudel-Vacaresse, Angelique; Languille, Florie

    2016-08-01

    This article summarizes the activity of CNES during the In-Orbit Calibration Phase of Sentinel-2A as well as the transfer of GIPP (Ground Image Processing Parameters) production from CNES to ESRIN. The state of the main calibration parameters and performances, a few months before the PDGS is declared fully operational, is listed and explained. In radiometry, special attention is paid to the absolute calibration using the on-board diffuser, and to vicarious calibration methods using instrumented or statistically well-characterized sites and inter-comparisons with other sensors. Regarding geometry, the presentation focuses on the performance of absolute location with and without reference points. The requirements of multi-band and multi-temporal registration are presented. Finally, the construction and the future role of the GRI (Ground Reference Images) are explained.

  18. Rice Crop Monitoring Using Microwave and Optical Remotely Sensed Image Data

    NASA Astrophysics Data System (ADS)

    Suga, Y.; Konishi, T.; Takeuchi, S.; Kitano, Y.; Ito, S.

    Hiroshima Institute of Technology (HIT) operates direct down-links of microwave and optical satellite data in Japan. This study focuses on the validation of rice crop monitoring using microwave and optical remotely sensed image data acquired by satellites, referring to ground truth data such as crop height, ratio of crop vegetation cover and leaf area index in test sites in Japan. ENVISAT-1 ASAR data has the capability to capture imagery regularly and to monitor the rice growing cycle with alternating cross-polarization mode images. However, ASAR data is influenced by several parameters such as landcover structure and the direction and alignment of rice crop fields in the test sites. In this study, the validation was carried out by combining microwave and optical satellite image data with ground truth data on rice crop fields to investigate the above parameters. Multi-temporal, multi-direction (descending and ascending) and multi-angle ASAR alternating cross-polarization mode images were used to investigate the rice crop growing cycle. LANDSAT data were used to detect landcover structure and the direction and alignment of rice crop fields corresponding to the ASAR backscatter. As a result of this study, it was indicated that rice crop growth can be precisely monitored using multiple remotely sensed data and ground truth data, taking spatial, spectral, temporal and radiometric resolutions into consideration.

  19. Cloud information content analysis of multi-angular measurements in the oxygen A-band: application to 3MI and MSPI

    NASA Astrophysics Data System (ADS)

    Merlin, G.; Riedi, J.; Labonnote, L. C.; Cornet, C.; Davis, A. B.; Dubuisson, P.; Desmons, M.; Ferlay, N.; Parol, F.

    2015-12-01

    The vertical distribution of cloud cover has a significant impact on a large number of meteorological and climatic processes. Cloud top altitude and cloud geometrical thickness are therefore essential parameters. Previous studies established the possibility of retrieving those parameters from multi-angular oxygen A-band measurements. Here we study and compare the performance of future instruments. The 3MI (Multi-angle, Multi-channel and Multi-polarization Imager) instrument developed by EUMETSAT, which is an extension of the POLDER/PARASOL instrument, and MSPI (Multi-angle SpectroPolarimetric Imager) developed by NASA's Jet Propulsion Laboratory will measure total and polarized light reflected by the Earth's atmosphere-surface system in several spectral bands (from UV to SWIR) and several viewing geometries. Those instruments should provide opportunities to observe the links between cloud structures and the anisotropy of the solar radiation reflected into space. Specific algorithms will need to be developed in order to take advantage of the new capabilities of these instruments. However, prior to this effort, we need to understand, through a theoretical Shannon information content analysis, the limits and advantages of these new instruments for retrieving liquid and ice cloud properties, and especially, in this study, the amount of information provided by the A-band channel on the cloud top altitude (CTOP) and geometrical thickness (CGT). We compare the information content of the 3MI A-band in two configurations with that of MSPI. Quantitative information content estimates show that the retrieval of CTOP with high accuracy is possible in almost all cases investigated. The retrieval of CGT seems less easy but possible for optically thick clouds above a black surface, at least when CGT > 1-2 km.
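
    A compact sketch of the Shannon information content calculation referred to here, in the standard optimal-estimation form H = ½ log₂(|S_a| / |Ŝ|); the Jacobian and covariances are toy placeholders rather than 3MI or MSPI values:

    ```python
    import numpy as np

    # Toy linear forward model: 2 state parameters (e.g. CTOP, CGT) observed
    # through m multi-angular A-band measurements via the Jacobian K.
    rng = np.random.default_rng(0)
    m, n = 8, 2
    K = rng.normal(size=(m, n))                 # placeholder Jacobian
    S_a = np.diag([4.0, 1.0])                   # prior covariance of the state
    S_e = 0.01 * np.eye(m)                      # measurement-noise covariance

    # Posterior covariance from optimal estimation, then Shannon information
    # content in bits: H = 0.5 * log2(det(S_a) / det(S_hat)).
    S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    H = 0.5 * np.log2(np.linalg.det(S_a) / np.linalg.det(S_hat))
    print(f"information content: {H:.2f} bits")
    ```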

  20. Photogrammetry and ballistic analysis of a high-flying projectile in the STS-124 space shuttle launch

    NASA Astrophysics Data System (ADS)

    Metzger, Philip T.; Lane, John E.; Carilli, Robert A.; Long, Jason M.; Shawn, Kathy L.

    2010-07-01

    A method combining photogrammetry with ballistic analysis is demonstrated to identify flying debris in a rocket launch environment. Debris traveling near the STS-124 Space Shuttle was captured on cameras viewing the launch pad within the first few seconds after launch. One particular piece of debris caught the attention of investigators studying the release of flame trench fire bricks because its high trajectory could indicate a flight risk to the Space Shuttle. Digitized images from two pad perimeter high-speed 16-mm film cameras were processed using photogrammetry software based on a multi-parameter optimization technique. Reference points in the image were found from 3D CAD models of the launch pad and from surveyed points on the pad. The three-dimensional reference points were matched to the equivalent two-dimensional camera projections by optimizing the camera model parameters using a gradient search optimization technique. Using this method of solving the triangulation problem, the xyz position of the object's path relative to the reference point coordinate system was found for every set of synchronized images. This trajectory was then compared to a predicted trajectory while performing regression analysis on the ballistic coefficient and other parameters. This identified, with a high degree of confidence, the object's material density and thus its probable origin within the launch pad environment. Future extensions of this methodology may make it possible to diagnose the underlying causes of debris-releasing events in near-real time, thus improving flight safety.
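
    A minimal sketch of the triangulation step described above (linear DLT triangulation of a point seen by two calibrated cameras); the projection matrices and pixel coordinates are synthetic, and the study's gradient-search camera-model optimization is not reproduced here:

    ```python
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one 3D point from two camera views.

        P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates.
        """
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]            # back to inhomogeneous xyz

    # Synthetic cameras and a known point to check the round trip.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.3, -0.2, 5.0, 1.0])
    uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, uv1, uv2))   # ~ [0.3, -0.2, 5.0]
    ```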

  1. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
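
    A rough sketch of the local single-parameter sensitivity idea on a stand-in response surface: sweep each parameter across its range with the others fixed at nominal values and compare normalized slopes. The response function, parameter names, and ranges are invented for illustration and are not the winding-process models used in the paper.

    ```python
    import numpy as np

    def tensile_strength(temp, tension, speed):
        """Stand-in response surface for illustration only (not the paper's model)."""
        return 1200 - 0.02 * (temp - 320) ** 2 + 8.0 * tension - 0.5 * speed

    # Nominal point and parameter ranges (hypothetical units).
    nominal = {"temp": 320.0, "tension": 60.0, "speed": 10.0}
    ranges = {"temp": (280.0, 360.0), "tension": (40.0, 80.0), "speed": (5.0, 20.0)}

    # Local single-parameter sensitivity: normalized finite-difference slope of
    # the response as each parameter sweeps its range with the others fixed.
    for name, (lo, hi) in ranges.items():
        grid = np.linspace(lo, hi, 41)
        resp = np.array([tensile_strength(**{**nominal, name: v}) for v in grid])
        slope = np.gradient(resp, grid)
        # Normalize by range and nominal response so parameters are comparable.
        rel = slope * (hi - lo) / tensile_strength(**nominal)
        print(f"{name:8s}: max |relative sensitivity| = {np.abs(rel).max():.3f}")
    ```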

  2. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048

  3. Influence of sample preparation and reliability of automated numerical refocusing in stain-free analysis of dissected tissues with quantitative phase digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Kemper, Björn; Lenz, Philipp; Bettenworth, Dominik; Krausewitz, Philipp; Domagk, Dirk; Ketelhut, Steffi

    2015-05-01

    Digital holographic microscopy (DHM) has been demonstrated to be a versatile tool for high resolution, non-destructive quantitative phase imaging of surfaces and for multi-modal, minimally-invasive monitoring of living cell cultures in vitro. DHM provides quantitative monitoring of physiological processes through functional imaging and structural analysis, which, for example, gives new insight into the signalling of cellular water permeability and into cell morphology changes due to toxins and infections. In the analysis of dissected tissues, quantitative DHM phase contrast also opens prospective application fields through stain-free imaging and the quantification of tissue density changes. We show that DHM allows imaging of different tissue layers with high contrast in unstained tissue sections. As the investigation of fixed samples represents a very important application field in pathology, we also analyzed the influence of the sample preparation. The retrieved data demonstrate that the quality of quantitative DHM phase images of dissected tissues depends strongly on the fixing method and on common staining agents. As the reconstruction in DHM is performed numerically, multi-focus imaging is achieved from a single digital hologram. Thus, we evaluated the automated refocusing feature of DHM for application on different types of dissected tissues and found that highly reproducible holographic autofocusing can be achieved on moderately stained samples. Finally, it is demonstrated that alterations of the spatial refractive index distribution in murine and human tissue samples represent a reliable absolute parameter that is related to different degrees of inflammation in experimental colitis and Crohn's disease. This paves the way towards the usage of DHM in digital pathology for automated histological examinations and further studies to elucidate the translational potential of quantitative phase microscopy for the clinical management of patients, e.g., with inflammatory bowel disease.

  4. [New methods for the evaluation of bone quality. Assessment of bone structural property using imaging].

    PubMed

    Ito, Masako

    Structural properties of bone include the micro- and nano-structural properties of trabecular and cortical bone, and macroscopic geometry. Radiological techniques are useful for analyzing bone structural properties; multi-detector row CT (MDCT) or high-resolution peripheral QCT (HR-pQCT) is available to analyze human bone in vivo. For the analysis of hip geometry, CT-based hip structure analysis (HSA) is available as well as DXA-based HSA. These structural parameters are related to biomechanical properties, and these assessment tools provide information on pathological changes and on the effects of anti-osteoporotic agents on bone.

  5. Model-based recovery of histological parameters from multispectral images of the colon

    NASA Astrophysics Data System (ADS)

    Hidovic-Rowe, Dzena; Claridge, Ela

    2005-04-01

    Colon cancer alters the macroarchitecture of the colon tissue. Common changes include angiogenesis and the distortion of the tissue collagen matrix. Such changes affect the colon colouration. This paper presents the principles of a novel optical imaging method capable of extracting parameters depicting histological quantities of the colon. The method is based on a computational, physics-based model of light interaction with tissue. The colon structure is represented by three layers: mucosa, submucosa and muscle layer. Optical properties of the layers are defined by molar concentration and absorption coefficients of haemoglobins; the size and density of collagen fibres; the thickness of the layer and the refractive indexes of collagen and the medium. Using the entire histologically plausible ranges for these parameters, a cross-reference is created computationally between the histological quantities and the associated spectra. The output of the model was compared to experimental data acquired in vivo from 57 histologically confirmed normal and abnormal tissue samples and histological parameters were extracted. The model produced spectra which match well the measured data, with the corresponding spectral parameters being well within histologically plausible ranges. Parameters extracted for the abnormal spectra showed the increase in blood volume fraction and changes in collagen pattern characteristic of the colon cancer. The spectra extracted from multi-spectral images of ex-vivo colon including adenocarcinoma show the characteristic features associated with normal and abnormal colon tissue. These findings suggest that it should be possible to compute histological quantities for the colon from the multi-spectral images.
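
    A simplified sketch of the model-based inversion idea: spectra are precomputed over a grid of histological parameters with a forward model, and parameters are recovered for a measured spectrum by best match against that cross-reference. The forward model here is a toy two-parameter function, not the three-layer colon model:

    ```python
    import numpy as np

    WAVELENGTHS = np.linspace(450, 700, 26)   # nm

    def forward_model(blood_volume, collagen_density):
        """Toy stand-in for the physics-based tissue model (illustration only)."""
        absorption = blood_volume * np.exp(-((WAVELENGTHS - 560) / 40.0) ** 2)
        scattering = collagen_density * (WAVELENGTHS / 550.0) ** -1.2
        return np.exp(-absorption) * scattering

    # Cross-reference: spectra precomputed over plausible parameter ranges.
    bv_grid = np.linspace(0.01, 0.20, 20)
    cd_grid = np.linspace(0.5, 2.0, 16)
    library = np.array([[forward_model(bv, cd) for cd in cd_grid] for bv in bv_grid])

    # "Measured" spectrum (here simulated with noise), inverted by nearest match.
    true_bv, true_cd = 0.12, 1.3
    noise = np.random.default_rng(0).normal(0, 0.002, WAVELENGTHS.size)
    measured = forward_model(true_bv, true_cd) + noise
    errors = np.sum((library - measured) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(errors), errors.shape)
    print(f"recovered blood volume ~ {bv_grid[i]:.3f}, collagen density ~ {cd_grid[j]:.2f}")
    ```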

  6. Extreme multistability analysis of memristor-based chaotic system and its application in image decryption

    NASA Astrophysics Data System (ADS)

    Li, Chuang; Min, Fuhong; Jin, Qiusen; Ma, Hanyuan

    2017-12-01

    An active charge-controlled memristive Chua's circuit is implemented, and its basic properties are analyzed. Firstly, with the system trajectory starting from an equilibrium point, the dynamic behavior of multiple coexisting attractors depending on the memristor initial value and the system parameter is studied, which shows the coexisting behaviors of point, period, chaos, and quasi-period. Secondly, with the system motion starting from a non-equilibrium point, the dynamics of extreme multistability in a wide initial value domain are readily confirmed by new analytical methods. Furthermore, the simulation results indicate that strange chaotic attractors of multi-wing and multi-scroll type are observed when the observed signals are extended from voltage and current to power and energy, respectively. In particular, when different initial conditions are taken, coexisting strange chaotic attractors between the power and energy signals are exhibited. Finally, the chaotic sequences of the new system are used for encrypting a color image to protect image information security. The encryption performance is analyzed by histogram statistics, correlation, key space and key sensitivity. Simulation results show that the new memristive chaotic system has high security in color image encryption.
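
    A hedged sketch of chaotic-sequence image encryption and the adjacent-pixel correlation check mentioned above; a logistic map stands in for the memristive Chua system as the keystream generator, and the image is a synthetic gradient:

    ```python
    import numpy as np

    def chaotic_keystream(length, x0=0.3141, r=3.9999):
        """Logistic-map keystream (stand-in for the memristive chaotic system)."""
        x, out = x0, np.empty(length)
        for i in range(length):
            x = r * x * (1.0 - x)
            out[i] = x
        return (out * 256).astype(np.uint8)

    # Smooth synthetic test image so the plain image has high pixel correlation.
    rng = np.random.default_rng(0)
    base = np.add.outer(np.arange(64), np.arange(64)).astype(float)
    image = np.clip(base * 2 + rng.normal(0, 5, (64, 64)), 0, 255).astype(np.uint8)

    key = chaotic_keystream(image.size).reshape(image.shape)
    cipher = np.bitwise_xor(image, key)          # encryption
    recovered = np.bitwise_xor(cipher, key)      # decryption (same keystream)

    # Adjacent-pixel correlation should drop close to zero after encryption.
    def adjacent_corr(img):
        a = img[:, :-1].astype(float).ravel()
        b = img[:, 1:].astype(float).ravel()
        return np.corrcoef(a, b)[0, 1]

    print("plain  corr:", round(adjacent_corr(image), 3))
    print("cipher corr:", round(adjacent_corr(cipher), 3))
    print("lossless decryption:", bool(np.array_equal(recovered, image)))
    ```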

  7. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

    A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has been recently developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume includes a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. Experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder showed severe artifacts in conventional single-resolution PWLS, whereas extending the RFOV within the multi-resolution framework strongly reduced truncation artifacts. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600³, 800³, and 1000³ voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel size, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.
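
    For reference, a PWLS reconstruction of the kind described here minimizes an objective of the generic form below (notation generic, not taken verbatim from the paper):

    ```latex
    \hat{\mu} = \arg\min_{\mu}\;
        \underbrace{(y - A\mu)^{T} W \,(y - A\mu)}_{\text{weighted data fidelity}}
        \;+\; \beta\, R(\mu)
    ```

    where y are the measured (log) projections, A is the forward projector, W is a diagonal weighting matrix reflecting measurement noise, R is a roughness penalty, and β controls the regularization strength; the multi-resolution scheme changes only how the voxel grid underlying μ is parameterized.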

  8. Dense image matching of terrestrial imagery for deriving high-resolution topographic properties of vegetation locations in alpine terrain

    NASA Astrophysics Data System (ADS)

    Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.

    2018-04-01

    The investigation of changes in spatial patterns of vegetation and the identification of potential micro-refugia require detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to the limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations that are difficult to access. We propose a novel approach for rapid capture of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters such as slope, aspect, and potential solar irradiation were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to create elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into the interrelations between topographic parameters, such as potential solar irradiation, surface area, aspect and roughness.

  9. Storage and retrieval of digital images in dermatology.

    PubMed

    Bittorf, A; Krejci-Papa, N C; Diepgen, T L

    1995-11-01

    Differential diagnosis in dermatology relies on the interpretation of visual information in the form of clinical and histopathological images. Up until now, reference images have had to be retrieved from textbooks and/or appropriate journals. To overcome inherent limitations of those storage media with respect to the number of images stored, display, and search parameters available, we designed a computer-based database of digitized dermatologic images. Images were taken from the photo archive of the Dermatological Clinic of the University of Erlangen. A database was designed using the Entity-Relationship approach. It was implemented on a PC-Windows platform using MS Access® and MS Visual Basic®. A Sparc 10 workstation running the CERN Hypertext Transfer Protocol Daemon (httpd) 3.0 pre 6 software was used as the WWW server. For compressed storage on a hard drive, a quality factor of 60 allowed on-screen differential diagnosis and corresponded to a compression factor of 1:35 for clinical images and 1:40 for histopathological images. Hierarchical keys of clinical or histopathological criteria permitted multi-criteria searches. A script using the Common Gateway Interface (CGI) enabled remote search and image retrieval via the World-Wide-Web (W3). A dermatologic image database featuring clinical and histopathological images was constructed, which allows for multi-parameter searches and world-wide remote access.

  10. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm for motion-blurred image restoration based on PSF half-blind estimation with the Hough transform is introduced, building on a full analysis of the operating principle of the TDICCD camera and addressing the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to restoration distortion. Firstly, the mathematical model of image degradation was established using the a priori information of multi-frame images, and the two parameters that have a crucial influence on PSF estimation (motion blur length and angle) were set accordingly. Finally, the restored image is obtained through multiple iterations of the PSF estimate in the Fourier domain, starting from the initial value obtained by the above method. Experimental results show that the proposed algorithm can not only effectively solve the image distortion problem caused by relative motion between the TDICCD camera and moving objects, but also clearly restore the detailed characteristics of the original image.
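
    A compact sketch of the restoration pipeline implied above: build a linear-motion PSF from an estimated blur length and angle, then deconvolve. A Wiener filter stands in here for the iterative Fourier-domain refinement described in the paper, and the test image is random:

    ```python
    import numpy as np

    def motion_psf(length, angle_deg, size=15):
        """Linear-motion PSF from estimated blur length (pixels) and angle."""
        psf = np.zeros((size, size))
        c = size // 2
        t = np.linspace(-length / 2.0, length / 2.0, 4 * size)
        xs = np.clip(np.round(c + t * np.cos(np.radians(angle_deg))).astype(int), 0, size - 1)
        ys = np.clip(np.round(c + t * np.sin(np.radians(angle_deg))).astype(int), 0, size - 1)
        psf[ys, xs] = 1.0
        return psf / psf.sum()

    def wiener_deconvolve(blurred, psf, k=1e-2):
        """Frequency-domain Wiener deconvolution with noise-to-signal ratio k."""
        H = np.fft.fft2(psf, s=blurred.shape)
        G = np.fft.fft2(blurred)
        F = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F))

    # Simulate a blurred frame with a PSF estimated as length 9 px at 20 degrees.
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))
    psf = motion_psf(length=9, angle_deg=20.0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
    restored = wiener_deconvolve(blurred, psf)
    print("rms error after restoration:", float(np.sqrt(np.mean((restored - image) ** 2))))
    ```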

  11. Oil Spill Detection and Tracking Using Lipschitz Regularity and Multiscale Techniques in Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Ajadi, O. A.; Meyer, F. J.

    2014-12-01

    Automatic oil spill detection and tracking from Synthetic Aperture Radar (SAR) images is a difficult task, due in large part to the inhomogeneous properties of the sea surface, the high level of speckle inherent in SAR data, the complexity and highly non-Gaussian nature of amplitude information, and the low temporal sampling that is often achieved with SAR systems. This research presents a promising new oil spill detection and tracking method that is based on time series of SAR images. Through the combination of a number of advanced image processing techniques, the developed approach is able to mitigate some of these previously mentioned limitations of SAR-based oil-spill detection and enables fully automatic spill detection and tracking across a wide range of spatial scales. The method combines an initial automatic texture analysis with a consecutive change detection approach based on multi-scale image decomposition. The first step of the approach, a texture transformation of the original SAR images, is performed in order to normalize the ocean background and enhance the contrast between oil-covered and oil-free ocean surfaces. The Lipschitz regularity (LR), a local texture parameter, is used here due to its proven ability to normalize the reflectivity properties of ocean water and maximize the visibility of oil in water. To calculate LR, the images are decomposed using a two-dimensional continuous wavelet transform (2D-CWT) and transformed into Hölder space to measure LR. After texture transformation, the now normalized images are inserted into our multi-temporal change detection algorithm. The multi-temporal change detection approach is a two-step procedure including (1) data enhancement and filtering and (2) multi-scale automatic change detection. The performance of the developed approach is demonstrated by an application to oil spill areas in the Gulf of Mexico. In this example, areas affected by oil spills were identified from a series of ALOS PALSAR images acquired in 2010. The comparison showed exceptional performance of our method. This method can be applied to emergency management and decision support systems with a need for real-time data, and it shows great potential for rapid data analysis in other areas, including volcano detection, flood boundaries, forest health, and wildfires.

  12. Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Abdessetar, M.; Zhong, Y.

    2017-09-01

    Buildings change detection has the ability to quantify the temporal effect on urban areas, for urban evolution studies or damage assessment in disaster cases. In this context, change analysis might involve the use of available satellite images with different resolutions for quick responses. In this paper, to avoid traditional methods that rely on image resampling and suffer from salt-and-pepper effects, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel sizes when identifying new and demolished buildings in urban areas using geometric properties of the objects of interest. After rectifying the desired multi-date and multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, new and demolished buildings are identified from the obtained distances that are greater than the RMS value (no match at the same location).
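
    A small sketch of the centroid-coincident matching step: compute pairwise distances between building centroids at T0 and T1 and flag shapes with no counterpart within a distance threshold. The centroids and threshold here are arbitrary examples, not data from the paper:

    ```python
    import numpy as np

    # Hypothetical building centroids (map units) extracted at two dates.
    centroids_t0 = np.array([[10.0, 12.0], [40.5, 22.0], [75.0, 60.0]])
    centroids_t1 = np.array([[10.4, 11.8], [75.2, 60.3], [90.0, 15.0]])
    threshold = 2.0   # e.g. an RMS-derived matching tolerance

    def unmatched(source, target, tol):
        """Indices in `source` with no target centroid within `tol` (Euclidean)."""
        d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
        return np.where(d.min(axis=1) > tol)[0]

    demolished = unmatched(centroids_t0, centroids_t1, threshold)  # T0 -> T1
    new = unmatched(centroids_t1, centroids_t0, threshold)         # T1 -> T0
    print("demolished building indices:", demolished)   # -> [1]
    print("new building indices:", new)                 # -> [2]
    ```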

  13. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlying commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly, taking into consideration advances in multi-resolution analysis and model-based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided, and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Quantitative assessment of ischemia and reactive hyperemia of the dermal layers using multi - spectral imaging on the human arm

    NASA Astrophysics Data System (ADS)

    Kainerstorfer, Jana M.; Amyot, Franck; Demos, Stavros G.; Hassan, Moinuddin; Chernomordik, Victor; Hitzenberger, Christoph K.; Gandjbakhche, Amir H.; Riley, Jason D.

    2009-07-01

    Quantitative assessment of skin chromophores in a non-invasive fashion is often desirable. In particular, pixel-wise assessment of blood volume and blood oxygenation is beneficial for improved diagnostics. We utilized a multi-spectral imaging system for acquiring diffuse reflectance images of healthy volunteers' lower forearms. Ischemia and reactive hyperemia were induced by occluding the upper arm with a pressure cuff for 5 min at 180 mmHg. Multi-spectral images were taken every 30 s before, during and after occlusion. Image reconstruction for blood volume and blood oxygenation was performed using a two-layered skin model. As the images were taken in a non-contact way, strong artifacts related to the shape (curvature) of the arms were observed, making reconstruction of optical/physiological parameters highly inaccurate. We developed a curvature correction method, which is based on extracting the curvature directly from the acquired intensity images and does not require any additional measurements on the imaged object. The effectiveness of the algorithm was demonstrated on reconstruction results of blood volume and blood oxygenation for in vivo data during occlusion of the arm. Pixel-wise assessment of blood volume and blood oxygenation was made possible over the entire image area, and a comparison of occlusion effects between veins and surrounding skin was performed. Induced ischemia during occlusion and reactive hyperemia afterwards were observed and quantitatively assessed. Furthermore, the influence of epidermal thickness on the reconstruction results was evaluated, and the need for exact knowledge of this parameter for fully quantitative assessment was pointed out.

  15. Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images

    NASA Astrophysics Data System (ADS)

    Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.

    2012-02-01

    Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate multi-level A/V tree representations of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily based on arc skeleton computation followed by a tree-construction-based topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.

  16. Harmonic regression based multi-temporal cloud filtering algorithm for Landsat 8

    NASA Astrophysics Data System (ADS)

    Joshi, P.

    2015-12-01

    The Landsat data archive, though rich, has missing dates and periods owing to weather irregularities and inconsistent coverage. The satellite images are further subject to cloud cover effects, resulting in erroneous analysis and observations of ground features. In earlier studies, a change detection algorithm using statistical control charts on harmonic residuals of multi-temporal Landsat 5 data was shown to detect a few prominent remnant clouds [Brooks, Evan B., et al, 2014]. In this work we build on this harmonic regression approach to detect and filter clouds using a multi-temporal series of Landsat 8 images. First, we compute the harmonic coefficients by fitting models to annual training data; the residuals of these fits form a time series that is then subjected to Shewhart X-bar control charts, which signal deviations of cloudy points from the fitted multi-temporal Fourier curve. For a process with standard deviation σ, we found second- and third-order harmonic regression with an X-bar chart control limit L between 0.5σ and σ to be most efficient in detecting clouds. By implementing second-order harmonic regression with successive X-bar chart control limits of L and 0.5L on the NDVI, NDSI and haze optimized transformation (HOT), and utilizing the seasonal physical properties of these parameters, we have designed a novel multi-temporal algorithm for filtering clouds from Landsat 8 images. The method is applied to Virginia and Alabama in Landsat 8 UTM zones 17 and 16, respectively. Our algorithm efficiently filters all types of cloud cover with an overall accuracy greater than 90%. As a result of the multi-temporal operation and the ability to recreate the multi-temporal database of images using only the coefficients of the Fourier regression, our algorithm is largely storage- and time-efficient. The results show good potential for this multi-temporal approach to cloud detection as a timely and targeted solution for the Landsat 8 research community, catering to the need for innovative processing solutions in the early stage of the satellite's mission.
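
    A simplified sketch of the harmonic-regression residual test on a single pixel's time series: fit a second-order Fourier curve by least squares and flag observations whose residuals exceed the control limit L·σ. The time series, contaminated dates, and L value below are synthetic:

    ```python
    import numpy as np

    # Synthetic single-pixel NDVI time series over one year, with two dates
    # contaminated by cloud (sharp drops relative to the seasonal curve).
    doy = np.arange(8, 366, 16, dtype=float)                 # acquisition days
    t = 2.0 * np.pi * doy / 365.0
    ndvi = 0.5 + 0.25 * np.sin(t) + 0.05 * np.cos(2 * t)
    ndvi += np.random.default_rng(0).normal(0, 0.01, ndvi.size)
    ndvi[[5, 14]] -= 0.3                                      # simulated clouds

    # Second-order harmonic regression via ordinary least squares.
    X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t),
                         np.sin(2 * t), np.cos(2 * t)])
    coef, *_ = np.linalg.lstsq(X, ndvi, rcond=None)
    residuals = ndvi - X @ coef

    # Shewhart-style control limit L*sigma (here L = 0.75, between 0.5 and 1).
    sigma = residuals.std(ddof=X.shape[1])
    flagged = np.where(np.abs(residuals) > 0.75 * sigma)[0]
    print("flagged observations:", flagged)   # should include indices 5 and 14
    ```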

  17. Assessment of mechanical properties of isolated bovine intervertebral discs from multi-parametric magnetic resonance imaging.

    PubMed

    Recuerda, Maximilien; Périé, Delphine; Gilbert, Guillaume; Beaudoin, Gilles

    2012-10-12

    The treatment planning of spine pathologies requires information on the rigidity and permeability of the intervertebral discs (IVDs). Magnetic resonance imaging (MRI) offers great potential as a sensitive and non-invasive technique for describing the mechanical properties of IVDs. However, the literature reports small correlation coefficients between mechanical properties and MRI parameters. Our hypothesis is that the compressive modulus and the permeability of the IVD can be predicted by a linear combination of MRI parameters. Sixty IVDs were harvested from bovine tails and randomly separated into four groups (in-situ, digested-6h, digested-18h, digested-24h). Multi-parametric MRI acquisitions were used to quantify the relaxation times T1 and T2, the magnetization transfer ratio MTR, the apparent diffusion coefficient ADC and the fractional anisotropy FA. Unconfined compression, confined compression and direct permeability measurements were performed to quantify the compressive moduli and the hydraulic permeabilities. Differences between groups were evaluated with a one-way ANOVA. Multilinear regressions were performed between the dependent mechanical properties and the independent MRI parameters to verify our hypothesis. A principal component analysis was used to convert the set of possibly correlated variables into a set of linearly uncorrelated variables. Agglomerative hierarchical clustering was performed on the 3 principal components. Multilinear regressions showed that 45 to 80% of the Young's modulus E, the aggregate modulus in absence of deformation HA0, the radial permeability kr and the axial permeability in absence of deformation k0 can be explained by the MRI parameters within both the nucleus pulposus and the annulus fibrosus. The principal component analysis reduced our variables to two principal components with a cumulative variability of 52-65%, which increased to 70-82% when considering the third principal component. The dendrograms showed a natural division into four clusters for the nucleus pulposus and into three or four clusters for the annulus fibrosus. The compressive moduli and the permeabilities of isolated IVDs can be assessed mostly by MT and diffusion sequences. However, the relationships have to be improved with the inclusion of MRI parameters more sensitive to IVD degeneration. Before this technique can be used to quantify the mechanical properties of IVDs in vivo in patients suffering from various diseases, the relationships have to be defined for each degeneration state of the tissue that mimics the pathology. Our MRI protocol, combined with principal component analysis and agglomerative hierarchical clustering, is a promising tool to classify degenerated intervertebral discs and further find biomarkers and predictive factors of the evolution of the pathologies.
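
    A brief sketch of the multilinear-regression step relating MRI parameters to a mechanical property, with the coefficient of determination indicating how much of the property the MRI parameters explain. All data below are simulated, not the bovine measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 60   # discs

    # Simulated MRI parameters per disc: T1, T2, MTR, ADC, FA (arbitrary scales).
    mri = rng.normal(size=(n, 5))
    # Simulated mechanical property (e.g. aggregate modulus), here driven mainly
    # by the MT and diffusion columns plus noise.
    modulus = 1.5 * mri[:, 2] - 0.8 * mri[:, 3] + rng.normal(0, 0.5, n)

    # Multilinear regression: modulus ~ intercept + MRI parameters.
    X = np.column_stack([np.ones(n), mri])
    beta, *_ = np.linalg.lstsq(X, modulus, rcond=None)
    pred = X @ beta

    ss_res = np.sum((modulus - pred) ** 2)
    ss_tot = np.sum((modulus - modulus.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"R^2 = {r2:.2f}  (share of the modulus explained by MRI parameters)")
    ```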

  18. A programmable light engine for quantitative single molecule TIRF and HILO imaging.

    PubMed

    van 't Hoff, Marcel; de Sars, Vincent; Oheim, Martin

    2008-10-27

    We report on a simple yet powerful implementation of objective-type total internal reflection fluorescence (TIRF) and highly inclined and laminated optical sheet (HILO, a type of dark-field) illumination. Instead of focusing the illuminating laser beam to a single spot close to the edge of the microscope objective, we scan the focused spot in a circular orbit during the acquisition of a fluorescence image, thereby illuminating the sample from various directions. We measure parameters relevant for quantitative image analysis during fluorescence image acquisition by capturing an image of the excitation light distribution in an equivalent objective back focal plane (BFP). Operating at scan rates above 1 MHz, our programmable light engine allows directional averaging by circularly spinning the spot, even for sub-millisecond exposure times. We show that restoring the symmetry of TIRF/HILO illumination reduces scattering and produces an evenly lit field-of-view that affords on-line analysis of evanescent-field excited fluorescence without pre-processing. Utilizing crossed acousto-optical deflectors, our device generates arbitrary intensity profiles in the BFP, permitting variable-angle, multi-color illumination, or rapid exchange of objective lenses.

  19. Feature-based Alignment of Volumetric Multi-modal Images

    PubMed Central

    Toews, Matthew; Zöllei, Lilla; Wells, William M.

    2014-01-01

    This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm, that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955

  20. A Bayesian approach to distinguishing interdigitated tongue muscles from limited diffusion magnetic resonance imaging.

    PubMed

    Ye, Chuyang; Murano, Emi; Stone, Maureen; Prince, Jerry L

    2015-10-01

    The tongue is a critical organ for a variety of functions, including swallowing, respiration, and speech. It contains intrinsic and extrinsic muscles that play an important role in changing its shape and position. Diffusion tensor imaging (DTI) has been used to reconstruct tongue muscle fiber tracts. However, previous studies have been unable to reconstruct the crossing fibers that occur where the tongue muscles interdigitate, which is a large percentage of the tongue volume. To resolve crossing fibers, multi-tensor models on DTI and more advanced imaging modalities, such as high angular resolution diffusion imaging (HARDI) and diffusion spectrum imaging (DSI), have been proposed. However, because of the involuntary nature of swallowing, there is insufficient time to acquire a sufficient number of diffusion gradient directions to resolve crossing fibers while the in vivo tongue is in a fixed position. In this work, we address the challenge of distinguishing interdigitated tongue muscles from limited diffusion magnetic resonance imaging by using a multi-tensor model with a fixed tensor basis and incorporating prior directional knowledge. The prior directional knowledge provides information on likely fiber directions at each voxel, and is computed with anatomical knowledge of tongue muscles. The fiber directions are estimated within a maximum a posteriori (MAP) framework, and the resulting objective function is solved using a noise-aware weighted ℓ1-norm minimization algorithm. Experiments were performed on a digital crossing phantom and in vivo tongue diffusion data including three control subjects and four patients with glossectomies. On the digital phantom, effects of parameters, noise, and prior direction accuracy were studied, and parameter settings for real data were determined. The results on the in vivo data demonstrate that the proposed method is able to resolve interdigitated tongue muscles with limited gradient directions. The distributions of the computed fiber directions in both the controls and the patients were also compared, suggesting a potential clinical use for this imaging and image analysis methodology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. MultiSpec—a tool for multispectral hyperspectral image data analysis

    NASA Astrophysics Data System (ADS)

    Biehl, Larry; Landgrebe, David

    2002-12-01

    MultiSpec is a multispectral image data analysis software application. It is intended to provide a fast, easy-to-use means for analysis of multispectral image data, such as that from the Landsat, SPOT, MODIS or IKONOS series of Earth observational satellites, hyperspectral data such as that from the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) and EO-1 Hyperion satellite system, or the data that will be produced by the next generation of Earth observational sensors. The primary purpose for the system was to make new, otherwise complex analysis tools available to the general Earth science community. It has also found use in displaying and analyzing many other types of non-space related digital imagery, such as medical image data, and in K-12 and university level educational activities. MultiSpec has been implemented for both the Apple Macintosh® and Microsoft Windows® operating systems (OS). The effort was first begun on the Macintosh OS in 1988. The GLOBE ( http://www.globe.gov) program supported the development of a subset of MultiSpec for the Windows OS in 1995. Since then most (but not all) of the features in the Macintosh OS version have been ported to the Windows OS version. Although copyrighted, MultiSpec with its documentation is distributed without charge. The Macintosh and Windows versions and documentation on their use are available from the World Wide Web at URL: http://dynamo.ecn.purdue.edu/~biehl/MultiSpec/. MultiSpec is copyrighted (1991-2001) by Purdue Research Foundation, West Lafayette, Indiana 47907.

  2. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.

  3. Multi-exponential analysis of magnitude MR images using a quantitative multispectral edge-preserving filter.

    PubMed

    Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre

    2003-03-01

    A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained in current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral nonlinear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
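
    A minimal sketch of the discrete multi-exponential step, assuming a bi-exponential T2 model fitted by non-linear least squares; echo times, amplitudes and starting values are placeholders, and the total least-squares linear prediction initialization is replaced here by simple guesses.

```python
# Sketch: bi-exponential decomposition of a T2 decay curve with non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def biexp(te, a1, t2_1, a2, t2_2):
    return a1 * np.exp(-te / t2_1) + a2 * np.exp(-te / t2_2)

te = np.arange(10, 330, 10, dtype=float)            # echo times (ms)
true = biexp(te, 0.7, 35.0, 0.3, 120.0)
noisy = true + np.random.default_rng(1).normal(0, 0.01, te.size)

# Initial estimates would normally come from total least-squares linear prediction;
# plain guesses are used here instead.
p0 = [0.5, 20.0, 0.5, 150.0]
popt, _ = curve_fit(biexp, te, noisy, p0=p0, bounds=(0, np.inf))
print("amplitudes:", popt[0], popt[2], " T2s (ms):", popt[1], popt[3])
```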

  4. Simultaneously extracting multiple parameters via multi-distance and multi-exposure diffuse speckle contrast analysis

    PubMed Central

    Liu, Jialin; Zhang, Hongchao; Lu, Jian; Ni, Xiaowu; Shen, Zhonghua

    2017-01-01

    Recent advancements in diffuse speckle contrast analysis (DSCA) have opened the path for noninvasive acquisition of deep tissue microvascular blood flow. In fact, in addition to the blood flow index αDB, variations of the tissue optical absorption μa, the reduced scattering coefficient μs′, and the coherence factor β can modulate temporal fluctuations of speckle patterns. In this study, we use multi-distance and multi-exposure DSCA (MDME-DSCA) to simultaneously extract multiple parameters such as μa, μs′, αDB, and β. The MDME-DSCA approach was validated with simulated data and phantom experiments. Moreover, as a comparison, the results also show that it is impractical to simultaneously obtain multiple parameters by multi-exposure DSCA (ME-DSCA). PMID:29082083

  5. Object-oriented crop mapping and monitoring using multi-temporal polarimetric RADARSAT-2 data

    NASA Astrophysics Data System (ADS)

    Jiao, Xianfeng; Kovacs, John M.; Shang, Jiali; McNairn, Heather; Walters, Dan; Ma, Baoluo; Geng, Xiaoyuan

    2014-10-01

    The aim of this paper is to assess the accuracy of an object-oriented classification of polarimetric Synthetic Aperture Radar (PolSAR) data to map and monitor crops using 19 RADARSAT-2 fine beam polarimetric (FQ) images of an agricultural area in North-eastern Ontario, Canada. Polarimetric images and field data were acquired during the 2011 and 2012 growing seasons. The classification and field data collection focused on the main crop types grown in the region, which include: wheat, oat, soybean, canola and forage. The polarimetric parameters were extracted with PolSAR analysis using both the Cloude-Pottier and Freeman-Durden decompositions. The object-oriented classification, with a single date of PolSAR data, was able to classify all five crop types with an accuracy of 95% and Kappa of 0.93; a 6% improvement in comparison with linear-polarization-only classification. However, the time of acquisition is crucial. The larger biomass crops of canola and soybean were most accurately mapped, whereas the identification of oat and wheat was more variable. The multi-temporal data using the Cloude-Pottier decomposition parameters provided the best classification accuracy compared to the linear polarizations and the Freeman-Durden decomposition parameters. In general, the object-oriented classifications were able to accurately map crop types by reducing the noise inherent in the SAR data. Furthermore, using the crop classification maps we were able to monitor crop growth stage based on a trend analysis of the radar response. Based on field data from canola crops, there was a strong relationship between the phenological growth stage, based on the BBCH scale, and the HV backscatter and entropy.

  6. Seismic data enhancement and regularization using finite offset Common Diffraction Surface (CDS) stack

    NASA Astrophysics Data System (ADS)

    Garabito, German; Cruz, João Carlos Ribeiro; Oliva, Pedro Andrés Chira; Söllner, Walter

    2017-01-01

    The Common Reflection Surface stack is a robust method for simulating zero-offset and common-offset sections with high accuracy from multi-coverage seismic data. For simulating common-offset sections, the Common-Reflection-Surface stack method uses a hyperbolic traveltime approximation that depends on five kinematic parameters for each selected sample point of the common-offset section to be simulated. The main challenge of this method is to find a computationally efficient data-driven optimization strategy for accurately determining the five kinematic stacking parameters on which each sample of the stacked common-offset section depends. Several authors have applied multi-step strategies to obtain the optimal parameters by combining different pre-stack data configurations. Recently, other authors used one-step data-driven strategies based on a global optimization for estimating simultaneously the five parameters from multi-midpoint and multi-offset gathers. In order to increase the computational efficiency of the global optimization process, we use in this paper a reduced form of the Common-Reflection-Surface traveltime approximation that depends on only four parameters, the so-called Common Diffraction Surface traveltime approximation. By analyzing the convergence of both objective functions and the data enhancement effect after applying the two traveltime approximations to the Marmousi synthetic dataset and a real land dataset, we conclude that the Common-Diffraction-Surface approximation is more efficient within certain aperture limits and preserves at the same time a high image accuracy. The preserved image quality is also observed in a direct comparison after applying both approximations for simulating common-offset sections on noisy pre-stack data.

  7. Imaging Study of Multi-Crystalline Silicon Wafers Throughout the Manufacturing Process: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, S.; Yan, F.; Zaunbracher, K.

    2011-07-01

    Imaging techniques are applied to multi-crystalline silicon bricks, wafers at various process steps, and finished solar cells. Photoluminescence (PL) imaging is used to characterize defects and material quality on bricks and wafers. Defect regions within the wafers are influenced by brick position within an ingot and height within the brick. The defect areas in as-cut wafers are compared to imaging results from reverse-bias electroluminescence and dark lock-in thermography and cell parameters of near-neighbor finished cells. Defect areas are also characterized by defect band emissions. The defect areas measured by these techniques on as-cut wafers are shown to correlate to finished cell performance.

  8. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
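
    The core learner-fusion idea can be sketched as follows, assuming per-voxel features derived from a weak initial segmentation and multi-atlas labels as training targets; the features, labels and dimensions are synthetic stand-ins, not the MLF implementation.

```python
# Sketch: an AdaBoost learner mapping weak-segmentation features to multi-atlas labels,
# then applied to voxels of a new target image instead of deformable registrations.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Hypothetical per-voxel features: intensity, weak-segmentation label, spatial coords.
n_vox = 5000
X_train = rng.normal(size=(n_vox, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # stand-in for multi-atlas labels

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_target = rng.normal(size=(2000, 5))       # voxels of a new target image
refined_labels = clf.predict(X_target)      # replaces the expensive multi-atlas fusion
print("labeled voxels:", refined_labels.shape[0])
```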

  9. Neural analysis of bovine ovaries ultrasound images in the identification process of the corpus luteum

    NASA Astrophysics Data System (ADS)

    Górna, K.; Jaśkowski, B. M.; Okoń, P.; Czechlowski, M.; Koszela, K.; Zaborowicz, M.; Idziaszek, P.

    2017-07-01

    The aim of the paper is to show that neural image analysis is a useful method for identifying the development stage of the domestic bovine corpus luteum on digital USG (UltraSonoGraphy) images. The corpus luteum (CL) is a transient endocrine gland that develops after ovulation from the follicle secretory cells. The role of the CL is the production of progesterone, which regulates many reproductive functions. In the presented studies, identification of the corpus luteum was carried out on the basis of information contained in digital ultrasound images. The development stage of the corpus luteum was considered in two aspects: just before and during the domination phase, and during the luteolysis and degradation phase. Prior to classification, the ultrasound images were processed using a GLCM (Gray Level Co-occurrence Matrix). To generate a classification model, the Neural Networks module implemented in STATISTICA was used. Five representative parameters describing the ultrasound image were used as input variables. The output of the artificial neural network provided information about the development stage of the corpus luteum. The results of this study indicate that neural image analysis combined with GLCM texture analysis may be a useful tool for identifying the bovine corpus luteum in the context of its development phase. The best-performing artificial neural network model was an MLP (Multi-Layer Perceptron) with the structure 5:5-17-1:1.
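
    A minimal sketch of the GLCM-plus-MLP pipeline described above, using scikit-image (graycomatrix naming of version 0.19+) and scikit-learn rather than STATISTICA; the ROIs, labels and the 5-input/17-hidden-unit choice are illustrative.

```python
# Sketch: GLCM texture parameters of ultrasound ROIs fed to a small MLP classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(roi_u8):
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]  # placeholder ROIs
X = np.array([glcm_features(r) for r in rois])
y = rng.integers(0, 2, 40)     # 0 = domination phase, 1 = luteolysis (placeholder labels)

mlp = MLPClassifier(hidden_layer_sizes=(17,), max_iter=2000, random_state=0)
mlp.fit(X, y)
print("training accuracy:", mlp.score(X, y))
```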

  10. [A Method to Reconstruct Surface Reflectance Spectrum from Multispectral Image Based on Canopy Radiation Transfer Model].

    PubMed

    Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li

    2015-07-01

    Due to the limited number of spectral bands of a multi-spectral sensor, it is difficult to reconstruct the surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiation transfer model. The method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data using Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed distinct feature information for different surface types. To test the performance of this method, the simulated reflectance spectra were convolved with the Landsat ETM+ spectral response curves and the Moderate Resolution Imaging Spectrometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, indicating that the reconstructed reflectance spectra are reliable.
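
    The band-simulation step (convolving a reflectance spectrum with a sensor spectral response) can be sketched as below; the toy spectrum, Gaussian response functions and band centres/FWHMs are placeholders rather than the actual ETM+ or MODIS response curves.

```python
# Sketch: band-equivalent reflectance from a simulated spectrum and a spectral response.
import numpy as np

wl = np.arange(400, 2401, 1.0)                               # wavelength grid (nm)
reflectance = 0.05 + 0.25 / (1 + np.exp(-(wl - 720) / 15))   # toy red-edge-like spectrum

def band_reflectance(wl, refl, srf_center, srf_fwhm):
    """Band-equivalent reflectance for a Gaussian spectral response function."""
    sigma = srf_fwhm / 2.355
    srf = np.exp(-0.5 * ((wl - srf_center) / sigma) ** 2)
    return np.trapz(refl * srf, wl) / np.trapz(srf, wl)

# Illustrative band centers/FWHMs (not the published response curves).
for center, fwhm in [(660.0, 60.0), (835.0, 120.0)]:
    print(center, round(band_reflectance(wl, reflectance, center, fwhm), 4))
```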

  11. The MCIC collection: a shared repository of multi-modal, multi-site brain image data from a clinical investigation of schizophrenia

    PubMed Central

    Gollub, Randy L.; Shoemaker, Jody M.; King, Margaret D.; White, Tonya; Ehrlich, Stefan; Sponheim, Scott R.; Clark, Vincent P.; Turner, Jessica A.; Mueller, Bryon A.; Magnotta, Vince; O’Leary, Daniel; Ho, Beng C.; Brauns, Stefan; Manoach, Dara S.; Seidman, Larry; Bustillo, Juan R.; Lauriello, John; Bockholt, Jeremy; Lim, Kelvin O.; Rosen, Bruce R.; Schulz, S. Charles; Calhoun, Vince D.; Andreasen, Nancy C.

    2013-01-01

    Expertly collected, well-curated data sets consisting of comprehensive clinical characterization and raw structural, functional and diffusion-weighted DICOM images in schizophrenia patients and sex and age-matched controls are now accessible to the scientific community through an on-line data repository (coins.mrn.org). The Mental Illness and Neuroscience Discovery Institute, now the Mind Research Network (MRN, www.mrn.org), comprised of investigators at the University of New Mexico, the University of Minnesota, Massachusetts General Hospital, and the University of Iowa, conducted a cross-sectional study to identify quantitative neuroimaging biomarkers of schizophrenia. Data acquisition across multiple sites permitted the integration and cross-validation of clinical, cognitive, morphometric, and functional neuroimaging results gathered from unique samples of schizophrenia patients and controls using a common protocol across sites. Particular effort was made to recruit patients early in the course of their illness, at the onset of their symptoms. There is a relatively even sampling of illness duration in chronic patients. This data repository will be useful to 1) scientists who can study schizophrenia by further analysis of this cohort and/or by pooling with other data; 2) computer scientists and software algorithm developers for testing and validating novel registration, segmentation, and other analysis software; and 3) educators in the fields of neuroimaging, medical image analysis and medical imaging informatics who need exemplar data sets for courses and workshops. Sharing provides the opportunity for independent replication of already published results from this data set and novel exploration. This manuscript describes the inclusion/exclusion criteria, imaging parameters and other information that will assist those wishing to use this data repository. PMID:23760817

  12. The MCIC collection: a shared repository of multi-modal, multi-site brain image data from a clinical investigation of schizophrenia.

    PubMed

    Gollub, Randy L; Shoemaker, Jody M; King, Margaret D; White, Tonya; Ehrlich, Stefan; Sponheim, Scott R; Clark, Vincent P; Turner, Jessica A; Mueller, Bryon A; Magnotta, Vince; O'Leary, Daniel; Ho, Beng C; Brauns, Stefan; Manoach, Dara S; Seidman, Larry; Bustillo, Juan R; Lauriello, John; Bockholt, Jeremy; Lim, Kelvin O; Rosen, Bruce R; Schulz, S Charles; Calhoun, Vince D; Andreasen, Nancy C

    2013-07-01

    Expertly collected, well-curated data sets consisting of comprehensive clinical characterization and raw structural, functional and diffusion-weighted DICOM images in schizophrenia patients and sex and age-matched controls are now accessible to the scientific community through an on-line data repository (coins.mrn.org). The Mental Illness and Neuroscience Discovery Institute, now the Mind Research Network (MRN, http://www.mrn.org/ ), comprised of investigators at the University of New Mexico, the University of Minnesota, Massachusetts General Hospital, and the University of Iowa, conducted a cross-sectional study to identify quantitative neuroimaging biomarkers of schizophrenia. Data acquisition across multiple sites permitted the integration and cross-validation of clinical, cognitive, morphometric, and functional neuroimaging results gathered from unique samples of schizophrenia patients and controls using a common protocol across sites. Particular effort was made to recruit patients early in the course of their illness, at the onset of their symptoms. There is a relatively even sampling of illness duration in chronic patients. This data repository will be useful to 1) scientists who can study schizophrenia by further analysis of this cohort and/or by pooling with other data; 2) computer scientists and software algorithm developers for testing and validating novel registration, segmentation, and other analysis software; and 3) educators in the fields of neuroimaging, medical image analysis and medical imaging informatics who need exemplar data sets for courses and workshops. Sharing provides the opportunity for independent replication of already published results from this data set and novel exploration. This manuscript describes the inclusion/exclusion criteria, imaging parameters and other information that will assist those wishing to use this data repository.

  13. MULTI-SOURCE FEATURE LEARNING FOR JOINT ANALYSIS OF INCOMPLETE MULTIPLE HETEROGENEOUS NEUROIMAGING DATA

    PubMed Central

    Yuan, Lei; Wang, Yalin; Thompson, Paul M.; Narayan, Vaibhav A.; Ye, Jieping

    2012-01-01

    Analysis of incomplete data is a big challenge when integrating large-scale brain imaging datasets from different imaging modalities. In the Alzheimer’s Disease Neuroimaging Initiative (ADNI), for example, over half of the subjects lack cerebrospinal fluid (CSF) measurements; an independent half of the subjects do not have fluorodeoxyglucose positron emission tomography (FDG-PET) scans; many lack proteomics measurements. Traditionally, subjects with missing measures are discarded, resulting in a severe loss of available information. In this paper, we address this problem by proposing an incomplete Multi-Source Feature (iMSF) learning method where all the samples (with at least one available data source) can be used. To illustrate the proposed approach, we classify patients from the ADNI study into groups with Alzheimer’s disease (AD), mild cognitive impairment (MCI) and normal controls, based on the multi-modality data. At baseline, ADNI’s 780 participants (172 AD, 397 MCI, 211 NC) have at least one of four data types: magnetic resonance imaging (MRI), FDG-PET, CSF and proteomics. These data are used to test our algorithm. Depending on the problem being solved, we divide our samples according to the availability of data sources, and we learn shared sets of features with state-of-the-art sparse learning methods. To build a practical and robust system, we construct a classifier ensemble by combining our method with four other methods for missing value estimation. Comprehensive experiments with various parameters show that our proposed iMSF method and the ensemble model yield stable and promising results. PMID:22498655

  14. Multi-frequency and polarimetric radar backscatter signatures for discrimination between agricultural crops at the Flevoland experimental test site

    NASA Technical Reports Server (NTRS)

    Freeman, A.; Villasenor, J.; Klein, J. D.

    1991-01-01

    We describe the calibration and analysis of multi-frequency, multi-polarization radar backscatter signatures over an agriculture test site in the Netherlands. The calibration procedure involved two stages: in the first stage, polarimetric and radiometric calibrations (ignoring noise) were carried out using square-base trihedral corner reflector signatures and some properties of the clutter background. In the second stage, a novel algorithm was used to estimate the noise level in the polarimetric data channels by using the measured signature of an idealized rough surface with Bragg scattering (the ocean in this case). This estimated noise level was then used to correct the measured backscatter signatures from the agriculture fields. We examine the significance of several key parameters extracted from the calibrated and noise-corrected backscatter signatures. The significance is assessed in terms of the ability to uniquely separate among classes from 13 different backscatter types selected from the test site data, including eleven different crops, one forest and one ocean area. Using the parameters with the highest separation for a given class, we use a hierarchical algorithm to classify the entire image. We find that many classes, including ocean, forest, potato, and beet, can be identified with high reliability, while the classes for which no single parameter exhibits sufficient separation have higher rates of misclassification. We expect that modified decision criteria involving simultaneous consideration of several parameters increase performance for these classes.

  15. BaTMAn: Bayesian Technique for Multi-image Analysis

    NASA Astrophysics Data System (ADS)

    Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.

    2016-12-01

    Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.

  16. Multi-modal magnetic resonance imaging and histology of vascular function in xenografts using macromolecular contrast agent hyperbranched polyglycerol (HPG-GdF).

    PubMed

    Baker, Jennifer H E; McPhee, Kelly C; Moosvi, Firas; Saatchi, Katayoun; Häfeli, Urs O; Minchinton, Andrew I; Reinsberg, Stefan A

    2016-01-01

    Macromolecular gadolinium (Gd)-based contrast agents are in development as blood pool markers for MRI. HPG-GdF is a 583 kDa hyperbranched polyglycerol doubly tagged with Gd and Alexa 647 nm dye, making it both MR and histologically visible. In this study we examined the location of HPG-GdF in whole-tumor xenograft sections matched to in vivo DCE-MR images of both HPG-GdF and Gadovist. Despite its large size, we have shown that HPG-GdF extravasates from some tumor vessels and accumulates over time, but does not distribute beyond a few cell diameters from vessels. Fractional plasma volume (fPV) and apparent permeability-surface area product (aPS) parameters were derived from the MR concentration-time curves of HPG-GdF. Non-viable necrotic tumor tissue was excluded from the analysis by applying a novel bolus arrival time (BAT) algorithm to all voxels. aPS derived from HPG-GdF was the only MR parameter to identify a difference in vascular function between HCT116 and HT29 colorectal tumors. This study is the first to relate low and high molecular weight contrast agents with matched whole-tumor histological sections. These detailed comparisons identified tumor regions that appear distinct from each other using the HPG-GdF biomarkers related to perfusion and vessel leakiness, while Gadovist-imaged parameter measures in the same regions were unable to detect variation in vascular function. We have established HPG-GdF as a biocompatible multi-modal high molecular weight contrast agent with application for examining vascular function in both MR and histological modalities. Copyright © 2015 John Wiley & Sons, Ltd.

  17. A multi-characteristic based algorithm for classifying vegetation in a plateau area: Qinghai Lake watershed, northwestern China

    NASA Astrophysics Data System (ADS)

    Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng

    2015-10-01

    Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas where in situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD sensor (HJ-1 CCD) images, both at 30 m spatial resolution, were employed for identifying and monitoring vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the produced vegetation classification maps, it was found that the DCT using multi-season data and geomorphologic parameters was superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time series data and geomorphologic parameters appeared to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics might provide a theoretical basis and a general approach to automatic extraction of vegetation types from remote sensing imagery over plateau areas.
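
    A minimal sketch of a decision-tree classification driven by multi-season spectral indices and DEM-derived terrain attributes, assuming synthetic features and pseudo-labels; it is not the authors' rule set.

```python
# Sketch: decision-tree vegetation classification from multi-season NDVI plus terrain.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.uniform(0.0, 0.9, n),     # spring NDVI
    rng.uniform(0.0, 0.9, n),     # summer NDVI
    rng.uniform(0.0, 0.9, n),     # autumn NDVI
    rng.uniform(3000, 5000, n),   # elevation (m)
    rng.uniform(0, 40, n),        # slope (deg)
])
# Pseudo-classes derived from the features so the tree has signal to learn.
y = (2 * (X[:, 1] > 0.45) + (X[:, 3] > 4000) + (X[:, 4] > 20)).astype(int)

dct = DecisionTreeClassifier(max_depth=6, random_state=0)
print("CV accuracy:", cross_val_score(dct, X, y, cv=5).mean())
```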

  18. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) in early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as the objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multi-objective criteria. A Support Vector Machine (SVM) is used as the predictive model, while a nondominated sorting-based multi-objective evolutionary computation algorithm II (NSGA-II) is used for solving the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance can be obtained by combining all features.

  19. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    NASA Astrophysics Data System (ADS)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
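
    The bin-width idea can be sketched as follows: mutual information between two bands is computed with histogram bin widths tied to an assumed shot-noise level and evaluated over candidate shifts; the noise value, simulated bands and bin-width factor are illustrative, not DigitalGlobe's processing.

```python
# Sketch: mutual information between a VNIR band and a shifted SWIR band with
# noise-proportional histogram bin widths; MI should peak at zero shift.
import numpy as np

def mutual_information(a, b, bin_width_a, bin_width_b):
    bins_a = np.arange(a.min(), a.max() + bin_width_a, bin_width_a)
    bins_b = np.arange(b.min(), b.max() + bin_width_b, bin_width_b)
    pab, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
    pab /= pab.sum()
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)
    nz = pab > 0
    return np.sum(pab[nz] * np.log(pab[nz] / np.outer(pa, pb)[nz]))

rng = np.random.default_rng(0)
vnir = rng.normal(1000, 200, (128, 128))
swir = 0.5 * vnir + rng.normal(0, 30, vnir.shape)   # correlated, noisy second band

shot_noise = 25.0                                   # assumed noise sigma in DN
for shift in range(-2, 3):                          # 1-pixel steps along columns
    mi = mutual_information(vnir[:, 2:-2], np.roll(swir, shift, axis=1)[:, 2:-2],
                            bin_width_a=3 * shot_noise, bin_width_b=3 * shot_noise)
    print(shift, round(mi, 4))
```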

  20. Principle component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) from healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, an experimental setup comprising excitation and detection branches has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with central wavelengths of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been applied to analyze the multi-spectral fluorescence imaging data. The results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
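
    A minimal sketch of the PCA-plus-LDA analysis on per-pixel multi-spectral autofluorescence intensities, using scikit-learn and entirely synthetic band values for the two classes.

```python
# Sketch: PCA dimensionality reduction followed by LDA classification (BCC vs. healthy).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
healthy = rng.normal([1.0, 0.8, 0.6, 0.4], 0.15, size=(300, 4))  # 400/450/500/550 nm bands
bcc = rng.normal([0.7, 0.9, 0.5, 0.5], 0.15, size=(300, 4))
X = np.vstack([healthy, bcc])
y = np.array([0] * 300 + [1] * 300)

model = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```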

  1. Differences in Multi-Modal Ultrasound Imaging between Triple Negative and Non-Triple Negative Breast Cancer.

    PubMed

    Li, Ziyao; Tian, Jiawei; Wang, Xiaowei; Wang, Ying; Wang, Zhenzhen; Zhang, Lei; Jing, Hui; Wu, Tong

    2016-04-01

    The objective of this study was to identify multi-modal ultrasound imaging parameters that could potentially help to differentiate between triple negative breast cancer (TNBC) and non-TNBC. Conventional ultrasonography, ultrasound strain elastography and 3-D ultrasound (3-D-US) findings from 50 TNBC and 179 non-TNBC patients were retrospectively reviewed. Immunohistochemical examination was used as the reference gold standard for cancer subtyping. Different ultrasound modalities were initially analyzed to define TNBC-related features. Subsequently, logistic regression analysis was applied to TNBC-related features to establish models for predicting TNBC. TNBCs often presented as micro-lobulated, markedly hypo-echoic masses with an abrupt interface (p = 0.015, 0.0015 and 0.004, compared with non-TNBCs, respectively) on conventional ultrasound, and showed a diminished retraction pattern phenomenon in the coronal plane (p = 0.035) on 3-D-US. Our findings suggest that B-mode ultrasound and 3-D-US in multi-modality ultrasonography could be a useful non-invasive technique for differentiating TNBCs from non-TNBCs. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  2. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, A; Little, K; Chung, J

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
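
    A minimal sketch of a channelized Hotelling observer with Laguerre-Gauss channels applied to synthetic signal-present/absent ROIs; the channel width, noise model and nodule profile are assumptions, not the study's phantom data or parameters.

```python
# Sketch: CHO detectability with Laguerre-Gauss channels on synthetic ROIs.
import numpy as np
from scipy.special import eval_laguerre

def lg_channels(size, a=15.0, n_channels=10):
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x**2 + y**2
    chans = [np.sqrt(2) / a * np.exp(-np.pi * r2 / a**2) * eval_laguerre(n, 2 * np.pi * r2 / a**2)
             for n in range(n_channels)]
    return np.stack([c.ravel() for c in chans], axis=1)   # (pixels, channels)

rng = np.random.default_rng(0)
size, n_img = 64, 200
y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
nodule = 5.0 * np.exp(-(x**2 + y**2) / (2 * 6.0**2))       # Gaussian "nodule" signal

U = lg_channels(size)
absent = rng.normal(0, 10, (n_img, size * size))           # background-only ROIs
present = absent + nodule.ravel()                          # signal-known-exactly

v_a, v_p = absent @ U, present @ U                         # channelized data
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))                  # pooled channel covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))          # Hotelling template
t_a, t_p = v_a @ w, v_p @ w
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print("CHO detectability (SNR):", round(float(snr), 2))
```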

  3. Homogenization-based interval analysis for structural-acoustic problem involving periodical composites and multi-scale uncertain-but-bounded parameters.

    PubMed

    Chen, Ning; Yu, Dejie; Xia, Baizhan; Liu, Jian; Ma, Zhengdong

    2017-04-01

    This paper presents a homogenization-based interval analysis method for the prediction of coupled structural-acoustic systems involving periodical composites and multi-scale uncertain-but-bounded parameters. In the structural-acoustic system, the macro plate structure is assumed to be composed of a periodically uniform microstructure. The equivalent macro material properties of the microstructure are computed using the homogenization method. By integrating the first-order Taylor expansion interval analysis method with the homogenization-based finite element method, a homogenization-based interval finite element method (HIFEM) is developed to solve a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters. The corresponding formulations of the HIFEM are deduced. A subinterval technique is also introduced into the HIFEM for higher accuracy. Numerical examples of a hexahedral box and an automobile passenger compartment are given to demonstrate the efficiency of the presented method for a periodical composite structural-acoustic system with multi-scale uncertain-but-bounded parameters.
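
    The first-order Taylor interval idea can be sketched as below for a generic response function of uncertain-but-bounded inputs; the response function and parameter bounds are stand-ins, not the structural-acoustic HIFEM formulation.

```python
# Sketch: first-order Taylor interval propagation of uncertain-but-bounded parameters.
import numpy as np

def response(p):
    """Placeholder frequency-response-like quantity of interest."""
    E, rho, thickness = p
    return np.sqrt(E / rho) / thickness

center = np.array([2.1e11, 7800.0, 0.002])    # nominal E (Pa), density, thickness
radius = np.array([0.1e11, 200.0, 0.0001])    # uncertain-but-bounded half-widths

# Numerical gradient at the interval midpoint.
grad = np.zeros_like(center)
for i in range(center.size):
    h = 1e-6 * center[i]
    dp = np.zeros_like(center)
    dp[i] = h
    grad[i] = (response(center + dp) - response(center - dp)) / (2 * h)

half_width = np.sum(np.abs(grad) * radius)    # first-order interval radius of the output
mid = response(center)
print("response interval: [%.4g, %.4g]" % (mid - half_width, mid + half_width))

# A subinterval technique would split each input interval into smaller pieces,
# apply the same expansion on each piece, and take the union of the results.
```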

  4. Experimental validation of a Monte-Carlo-based inversion scheme for 3D quantitative photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Buchmann, Jens; Kaplan, Bernhard A.; Prohaska, Steffen; Laufer, Jan

    2017-03-01

    Quantitative photoacoustic tomography (qPAT) aims to extract physiological parameters, such as blood oxygen saturation (sO2), from measured multi-wavelength image data sets. The challenge of this approach lies in the inherently nonlinear fluence distribution in the tissue, which has to be accounted for by using an appropriate model, and the large scale of the inverse problem. In addition, the accuracy of experimental and scanner-specific parameters, such as the wavelength dependence of the incident fluence, the acoustic detector response, the beam profile and divergence, needs to be considered. This study aims at quantitative imaging of blood sO2, as it has been shown to be a more robust parameter compared to absolute concentrations. We propose a Monte-Carlo-based inversion scheme in conjunction with a reduction in the number of variables achieved using image segmentation. The inversion scheme is experimentally validated in tissue-mimicking phantoms consisting of polymer tubes suspended in a scattering liquid. The tubes were filled with chromophore solutions at different concentration ratios. 3-D multi-spectral image data sets were acquired using a Fabry-Perot based PA scanner. A quantitative comparison of the measured data with the output of the forward model is presented. Parameter estimates of chromophore concentration ratios were found to be within 5 % of the true values.
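
    As a simplified illustration of the final unmixing step (assuming the wavelength-dependent fluence has already been corrected by the Monte-Carlo inversion), sO2 can be estimated from multi-wavelength absorption by linear least squares; the extinction coefficients below are rough placeholders.

```python
# Sketch: linear spectral unmixing of oxy/deoxyhemoglobin to estimate sO2.
import numpy as np

wavelengths = [750, 800, 850]                 # nm
# Molar extinction coefficients [HbO2, Hb]; values are rough placeholders only.
E = np.array([[0.27, 0.75],
              [0.44, 0.44],
              [0.58, 0.37]])

true_c = np.array([0.8, 0.2])                 # HbO2, Hb (arbitrary units)
mu_a = E @ true_c + np.random.default_rng(0).normal(0, 0.005, 3)

c_hat, *_ = np.linalg.lstsq(E, mu_a, rcond=None)   # least-squares unmixing
so2 = c_hat[0] / c_hat.sum()
print("estimated sO2:", round(float(so2), 3))
```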

  5. Heterogeneous Optimization Framework: Reproducible Preprocessing of Multi-Spectral Clinical MRI for Neuro-Oncology Imaging Research.

    PubMed

    Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S

    2016-07-01

    Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.

  6. A dimensionless dynamic contrast enhanced MRI parameter for intra-prostatic tumour target volume delineation: initial comparison with histology

    NASA Astrophysics Data System (ADS)

    Hrinivich, W. Thomas; Gibson, Eli; Gaed, Mena; Gomez, Jose A.; Moussa, Madeleine; McKenzie, Charles A.; Bauman, Glenn S.; Ward, Aaron D.; Fenster, Aaron; Wong, Eugene

    2014-03-01

    Purpose: T2 weighted and diffusion weighted magnetic resonance imaging (MRI) show promise in isolating prostate tumours. Dynamic contrast enhanced (DCE)-MRI has also been employed as a component in multi-parametric tumour detection schemes. Model-based parameters such as Ktrans are conventionally used to characterize DCE images and require arterial contrast agent (CR) concentration. A robust parameter map that does not depend on arterial input may be more useful for target volume delineation. We present a dimensionless parameter (Wio) that characterizes CR wash-in and washout rates without requiring arterial CR concentration. Wio is compared to Ktrans in terms of ability to discriminate cancer in the prostate, as demonstrated via comparison with histology. Methods: Three subjects underwent DCE-MRI using gadolinium contrast and 7 s imaging temporal resolution. A pathologist identified cancer on whole-mount histology specimens, and slides were deformably registered to MR images. The ability of Wio maps to discriminate cancer was determined through receiver operating characteristic curve (ROC) analysis. Results: There is a trend that Wio shows greater area under the ROC curve (AUC) than Ktrans with median AUC values of 0.74 and 0.69 respectively, but the difference was not statistically significant based on a Wilcoxon signed-rank test (p = 0.13). Conclusions: Preliminary results indicate that Wio shows potential as a tool for Ktrans QA, showing similar ability to discriminate cancer in the prostate as Ktrans without requiring arterial CR concentration.

  7. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

    Optical flow is the instantaneous motion vector at each pixel of an image frame at a given time instant. The gradient-based approach to optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal, multi-resolution, coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; an iterative relationship from the highest to the lowest level yields the inter-frame affine parameters; and subsequent frames are compensated back to the first frame to obtain a stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
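
    A minimal sketch of a comparable coarse-to-fine approach using OpenCV's pyramidal Lucas-Kanade tracker and an affine fit for stabilization; this is a standard recipe, not the paper's exact algorithm, and the tracking parameters are illustrative.

```python
# Sketch: stabilize a frame against a reference using pyramidal LK flow + affine fit.
import cv2
import numpy as np

def stabilize_pair(ref_gray, cur_gray):
    # Inputs are assumed to be 8-bit grayscale frames.
    pts = cv2.goodFeaturesToTrack(ref_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref_gray, cur_gray, pts, None,
                                              winSize=(21, 21), maxLevel=4)
    good_ref = pts[status.ravel() == 1]
    good_cur = nxt[status.ravel() == 1]
    # Affine parameters relating the current frame to the reference frame.
    M, _ = cv2.estimateAffinePartial2D(good_cur, good_ref, method=cv2.RANSAC)
    h, w = ref_gray.shape
    return cv2.warpAffine(cur_gray, M, (w, h))   # current frame compensated to reference

# Usage: stabilized = [stabilize_pair(frames[0], f) for f in frames[1:]]
```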

  8. Performance assessment of multi-frequency processing of ICU chest images for enhanced visualization of tubes and catheters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Couwenhoven, Mary E.; Foos, David H.; Doran, James; Yankelevitz, David F.; Henschke, Claudia I.

    2008-03-01

    An image-processing method has been developed to improve the visibility of tube and catheter features in portable chest x-ray (CXR) images captured in the intensive care unit (ICU). The image-processing method is based on a multi-frequency approach, wherein the input image is decomposed into different spatial frequency bands, and those bands that contain the tube and catheter signals are individually enhanced by nonlinear boosting functions. Using a random sampling strategy, 50 cases were retrospectively selected for the study from a large database of portable CXR images that had been collected from multiple institutions over a two-year period. All images used in the study were captured using photo-stimulable, storage phosphor computed radiography (CR) systems. Each image was processed two ways. The images were processed with default image processing parameters such as those used in clinical settings (control). The 50 images were then separately processed using the new tube and catheter enhancement algorithm (test). Three board-certified radiologists participated in a reader study to assess differences in both detection-confidence performance and diagnostic efficiency between the control and test images. Images were evaluated on a diagnostic-quality, 3-megapixel monochrome monitor. Two scenarios were studied: the baseline scenario, representative of today's workflow (a single-control image presented with the window/level adjustments enabled) vs. the test scenario (a control/test image pair presented with a toggle enabled and the window/level settings disabled). The radiologists were asked to read the images in each scenario as they normally would for clinical diagnosis. Trend analysis indicates that the test scenario offers improved reading efficiency while providing as good or better detection capability compared to the baseline scenario.
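
    A minimal sketch of the multi-frequency idea: decompose the image into a Laplacian pyramid, apply per-band gains to the mid-frequency bands that carry tube and catheter edges, and recombine; the simple linear gains stand in for the nonlinear boosting functions, and the band count and gain values are assumptions.

```python
# Sketch: Laplacian-pyramid band decomposition and boosted recombination of a CXR image.
import cv2
import numpy as np

def enhance_bands(img, gains=(1.0, 1.8, 1.8, 1.2, 1.0)):
    # Assumes a 16-bit grayscale chest image; last gain applies to the low-pass residual.
    levels = len(gains) - 1
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    # Laplacian bands = difference between successive Gaussian levels.
    bands = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[::-1])
             for i in range(levels)]
    out = gauss[-1] * gains[-1]
    for i in reversed(range(levels)):
        out = cv2.pyrUp(out, dstsize=gauss[i].shape[::-1]) + gains[i] * bands[i]
    return np.clip(out, 0, 65535).astype(np.uint16)

# Usage: enhanced = enhance_bands(cv2.imread("cxr.png", cv2.IMREAD_UNCHANGED))
```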

  9. Fourier-Mellin moment-based intertwining map for image encryption

    NASA Astrophysics Data System (ADS)

    Kaur, Manjit; Kumar, Vijay

    2018-03-01

    In this paper, a robust image encryption technique that utilizes Fourier-Mellin moments and an intertwining logistic map is proposed. A Fourier-Mellin moment-based intertwining logistic map has been designed to overcome the issue of low sensitivity to the input image. A Multi-objective Non-Dominated Sorting Genetic Algorithm (NSGA-II) based on Reinforcement Learning (MNSGA-RL) has been used to optimize the required parameters of the intertwining logistic map. Fourier-Mellin moments are used to make the secret keys more secure. Thereafter, permutation and diffusion operations are carried out on the input image using the secret keys. The performance of the proposed image encryption technique has been evaluated on five well-known benchmark images and compared with seven well-known existing encryption techniques. The experimental results reveal that the proposed technique outperforms the others in terms of entropy, correlation analysis, unified average changing intensity and number of pixels change rate. The simulation results reveal that the proposed technique provides a high level of security and robustness against various types of attacks.

  10. Joint explorative analysis of neuroreceptor subsystems in the human brain: application to receptor-transporter correlation using PET data.

    PubMed

    Cselényi, Zsolt; Lundberg, Johan; Halldin, Christer; Farde, Lars; Gulyás, Balázs

    2004-10-01

    Positron emission tomography (PET) has proved to be a highly successful technique in the qualitative and quantitative exploration of the human brain's neurotransmitter-receptor systems. In recent years, the number of PET radioligands, targeted to different neuroreceptor systems of the human brain, has increased considerably. This development paves the way for a simultaneous analysis of different receptor systems and subsystems in the same individual. The detailed exploration of the versatility of neuroreceptor systems requires novel technical approaches, capable of operating on huge parametric image datasets. An initial step of such explorative data processing and analysis should be the development of novel exploratory data-mining tools to gain insight into the "structure" of complex multi-individual, multi-receptor data sets. For practical reasons, a possible and feasible starting point of multi-receptor research can be the analysis of the pre- and post-synaptic binding sites of the same neurotransmitter. In the present study, we propose an unsupervised, unbiased data-mining tool for this task and demonstrate its usefulness by using quantitative receptor maps, obtained with positron emission tomography, from five healthy subjects on (pre-synaptic) serotonin transporters (5-HTT or SERT) and (post-synaptic) 5-HT(1A) receptors. Major components of the proposed technique include the projection of the input receptor maps to a feature space, the quasi-clustering and classification of projected data (neighbourhood formation), trans-individual analysis of neighbourhood properties (trajectory analysis), and the back-projection of the results of trajectory analysis to normal space (creation of multi-receptor maps). The resulting multi-receptor maps suggest that complex relationships and tendencies in the relationship between pre- and post-synaptic transporter-receptor systems can be revealed and classified by using this method. As an example, we demonstrate the regional correlation of the serotonin transporter-receptor systems. These parameter-specific multi-receptor maps can usefully guide the researchers in their endeavour to formulate models of multi-receptor interactions and changes in the human brain.

  11. Novel histopathologic feature identified through image analysis augments stage II colorectal cancer clinical reporting

    PubMed Central

    Caie, Peter D.; Zhou, Ying; Turnbull, Arran K.; Oniscu, Anca; Harrison, David J.

    2016-01-01

    A number of candidate histopathologic factors show promise in identifying stage II colorectal cancer (CRC) patients at a high risk of disease-specific death; however, they can suffer from low reproducibility and none have replaced classical pathologic staging. We developed an image analysis algorithm which standardized the quantification of specific histopathologic features and exported a multi-parametric feature-set captured without bias. The image analysis algorithm was executed across a training set (n = 50) and the resultant big data was distilled through decision tree modelling to identify the most informative parameters to sub-categorize stage II CRC patients. The most significant, and novel, parameter identified was the ‘sum area of poorly differentiated clusters’ (AreaPDC). This feature was validated across a second cohort of stage II CRC patients (n = 134) (HR = 4; 95% CI, 1.5–11). Finally, the AreaPDC was integrated with the significant features within the clinical pathology report, pT stage and differentiation, into a novel prognostic index (HR = 7.5; 95% CI, 3–18.5) which improved upon current clinical staging (HR = 4.26; 95% CI, 1.7–10.3). The identification of poorly differentiated clusters as being highly significant in disease progression presents evidence to suggest that these features could be the source of novel targets to decrease the risk of disease-specific death. PMID:27322148

  12. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer ... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also ...

  13. STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.

    PubMed

    Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X

    2009-08-01

    This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.

  14. The optimal algorithm for Multi-source RS image fusion.

    PubMed

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that the fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm) for multi-source RS image fusion, integrating the merits of genetic algorithms with the advantages of the iterative self-organizing data analysis algorithm. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observed operator. The algorithm then designs the objective function as a weighted sum of evaluation indices and optimizes it by employing GSDA so as to obtain a higher-resolution RS image. As discussed above, the main points of the text are summarized as follows.
    •The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    •This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    •This text proposes the model operator and the observed operator as the fusion scheme of RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  15. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Tahari, Abdel K; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed-coverage limiting the axial field-of-view to ~15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ~45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically acceptable sampling schedules examined. The framework was also applied to six FDG PET patient studies, demonstrating clinical feasibility. Both simulated and clinical results indicated enhanced contrast-to-noise ratios (CNRs) for Ki images in tumor regions with notable background FDG concentration, such as the liver, where SUV performed relatively poorly. Overall, the proposed framework enables enhanced quantification of physiological parameters across the whole body. In addition, the total acquisition length can be reduced from 45 to ~35 min and still achieve improved or equivalent CNR compared to SUV, provided the true Ki contrast is sufficiently high. In the follow-up companion paper, a set of advanced linear regression schemes is presented to particularly address the presence of noise, and attempt to achieve a better trade-off between the mean-squared error and the CNR metrics, resulting in enhanced task-based imaging.
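
    A minimal sketch of the ordinary-least-squares Patlak step, assuming an image-derived plasma input and one mid-frame time per pass; the input function, frame times and kinetic values are synthetic, and in practice the fit is applied voxel-wise.

```python
# Sketch: OLS Patlak estimation of Ki and V from a sparse multi-pass time series.
import numpy as np
from scipy.integrate import cumulative_trapezoid

t = np.array([8, 13, 18, 23, 28, 33], dtype=float) * 60   # mid-frame times (s), one per pass
cp = 50.0 * np.exp(-t / 4000.0) + 5.0                      # plasma input Cp(t), placeholder

# Crude running integral of Cp; assumes Cp is roughly constant before the first frame.
int_cp = cumulative_trapezoid(cp, t, initial=0.0) + cp[0] * t[0]

ki_true, v_true = 0.02 / 60, 0.4
ct = ki_true * int_cp + v_true * cp                        # tissue curve under the Patlak model

# Patlak plot: Ct/Cp versus (integral of Cp)/Cp; slope = Ki, intercept = V.
x, y = int_cp / cp, ct / cp
A = np.column_stack([x, np.ones_like(x)])
(ki_hat, v_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print("Ki (1/s):", ki_hat, " V:", v_hat)
```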

  16. Dynamic whole body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    Static whole body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single bed-coverage limiting the axial field-of-view to ~15–20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole body PET acquisition protocol of ~45min total length is presented, composed of (i) an initial 6-min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (6 passes x 7 bed positions, each scanned for 45sec). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares (OLS) Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of 10 different clinically acceptable sampling schedules examined. The framework was also applied to six FDG PET patient studies, demonstrating clinical feasibility. Both simulated and clinical results indicated enhanced contrast-to-noise ratios (CNRs) for Ki images in tumor regions with notable background FDG concentration, such as the liver, where SUV performed relatively poorly. Overall, the proposed framework enables enhanced quantification of physiological parameters across the whole-body. In addition, the total acquisition length can be reduced from 45min to ~35min and still achieve improved or equivalent CNR compared to SUV, provided the true Ki contrast is sufficiently high. In the follow-up companion paper, a set of advanced linear regression schemes is presented to particularly address the presence of noise, and attempt to achieve a better trade-off between the mean-squared error (MSE) and the CNR metrics, resulting in enhanced task-based imaging. PMID:24080962

  17. Dynamic whole-body PET parametric imaging: I. Concept, acquisition protocol optimization and clinical application

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Tahari, Abdel K.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-10-01

    Static whole-body PET/CT, employing the standardized uptake value (SUV), is considered the standard clinical approach to diagnosis and treatment response monitoring for a wide range of oncologic malignancies. Alternative PET protocols involving dynamic acquisition of temporal images have been implemented in the research setting, allowing quantification of tracer dynamics, an important capability for tumor characterization and treatment response monitoring. Nonetheless, dynamic protocols have been confined to single-bed-coverage limiting the axial field-of-view to ˜15-20 cm, and have not been translated to the routine clinical context of whole-body PET imaging for the inspection of disseminated disease. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. We investigate solutions to address the challenges of: (i) long acquisitions, (ii) small number of dynamic frames per bed, and (iii) non-invasive quantification of kinetics in the plasma. In the present study, a novel dynamic (4D) whole-body PET acquisition protocol of ˜45 min total length is presented, composed of (i) an initial 6 min dynamic PET scan (24 frames) over the heart, followed by (ii) a sequence of multi-pass multi-bed PET scans (six passes × seven bed positions, each scanned for 45 s). Standard Patlak linear graphical analysis modeling was employed, coupled with image-derived plasma input function measurements. Ordinary least squares Patlak estimation was used as the baseline regression method to quantify the physiological parameters of tracer uptake rate Ki and total blood distribution volume V on an individual voxel basis. Extensive Monte Carlo simulation studies, using a wide set of published kinetic FDG parameters and GATE and XCAT platforms, were conducted to optimize the acquisition protocol from a range of ten different clinically acceptable sampling schedules examined. The framework was also applied to six FDG PET patient studies, demonstrating clinical feasibility. Both simulated and clinical results indicated enhanced contrast-to-noise ratios (CNRs) for Ki images in tumor regions with notable background FDG concentration, such as the liver, where SUV performed relatively poorly. Overall, the proposed framework enables enhanced quantification of physiological parameters across the whole body. In addition, the total acquisition length can be reduced from 45 to ˜35 min and still achieve improved or equivalent CNR compared to SUV, provided the true Ki contrast is sufficiently high. In the follow-up companion paper, a set of advanced linear regression schemes is presented to particularly address the presence of noise, and attempt to achieve a better trade-off between the mean-squared error and the CNR metrics, resulting in enhanced task-based imaging.

  18. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  19. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classification performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracies of 0.945 for LGG versus HGG and 0.961 for grade II, III and IV gliomas were achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, C, gamma and K. SVM is a promising tool in developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
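
    For readers who want to see how such a workflow can be assembled in practice, the sketch below combines SMOTE over-sampling, recursive feature elimination and an SVM under leave-one-out cross validation using scikit-learn and imbalanced-learn; it is not the authors' code, and the synthetic feature matrix, labels and parameter values are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # pipeline that allows resampling steps

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))            # placeholder: 120 patients x 40 histogram/texture attributes
y = (rng.random(120) < 0.3).astype(int)   # placeholder: imbalanced class labels (e.g. LGG vs HGG)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),                                 # balance the minority class
    ("rfe", RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10)),
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])

acc = cross_val_score(pipe, X, y, cv=LeaveOneOut(), scoring="accuracy")
print("LOOCV accuracy:", acc.mean())
```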

  20. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classification performance were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classification accuracies of 0.945 for LGG versus HGG and 0.961 for grade II, III and IV gliomas were achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classification accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by key parameters such as kernel type, C, gamma and K. SVM is a promising tool in developing an automated preoperative glioma grading system, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  1. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, and this choice can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves computational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm achieves better accuracy and robustness.
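
    As a generic illustration of the sparse-similarity idea (not the authors' specific multi-scale feature construction), the sketch below builds a sparse k-nearest-neighbour affinity matrix over per-pixel feature vectors and feeds it to scikit-learn's spectral clustering; the array shapes, neighbour count and cluster count are placeholders.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 6))   # placeholder: one 6-D feature vector per pixel

# Sparse distance graph: only the k nearest neighbours of each pixel are stored
knn = kneighbors_graph(features, n_neighbors=10, mode="distance", include_self=False)

# Turn stored distances into Gaussian similarities on the non-zero entries only
sigma = np.median(knn.data)
knn.data = np.exp(-knn.data ** 2 / (2.0 * sigma ** 2))

# Symmetrize so the sparse affinity matrix is valid for spectral clustering
affinity = 0.5 * (knn + knn.T)

labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            assign_labels="kmeans", random_state=0).fit_predict(affinity)
print(labels.shape)   # one cluster label per pixel
```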

  2. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
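
    A minimal example of the kind of variance-based global sensitivity analysis discussed above, here using the SALib library on a toy scalar model rather than a cellular Potts simulation; the parameter names, ranges and the toy output function are placeholders.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder problem definition: three "cell behaviour" parameters with unit ranges
problem = {
    "num_vars": 3,
    "names": ["adhesion", "chemotaxis", "elongation"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

# Saltelli sampling produces N * (2D + 2) parameter sets
param_values = saltelli.sample(problem, 1024)

# Toy model standing in for the morphogenesis simulation's output measure
def model(p):
    return np.sin(p[:, 0]) + 2.0 * p[:, 1] ** 2 + 0.5 * p[:, 0] * p[:, 2]

Y = model(param_values)

# First-order and total-order Sobol indices
Si = sobol.analyze(problem, Y, print_to_console=False)
print(Si["S1"])   # impact of single parameters
print(Si["ST"])   # total impact, including interactions
```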

  3. Enhancement of low visibility aerial images using histogram truncation and an explicit Retinex representation for balancing contrast and color consistency

    NASA Astrophysics Data System (ADS)

    Liu, Changjiang; Cheng, Irene; Zhang, Yi; Basu, Anup

    2017-06-01

    This paper presents an improved multi-scale Retinex (MSR) based enhancement for aerial images under low visibility. In traditional multi-scale Retinex, three scales are commonly employed, which limits its application scenarios. We extend our research to a general-purpose enhancement method, and design an MSR with more than three scales. Based on mathematical analysis and deductions, an explicit multi-scale representation is proposed that balances image contrast and color consistency. In addition, a histogram truncation technique is introduced as a post-processing strategy to remap the multi-scale Retinex output to the dynamic range of the display. Analysis of experimental results and comparisons with existing algorithms demonstrate the effectiveness and generality of the proposed method. Results on image quality assessment prove the accuracy of the proposed method with respect to both objective and subjective criteria.
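
    The core operations (a multi-scale Retinex with an arbitrary number of Gaussian scales, followed by histogram truncation to remap the output to the display range) can be sketched as below with NumPy and SciPy; this is a generic reimplementation of the standard MSR formulas, not the authors' explicit representation, and the scales and clip percentiles are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 200), weights=None):
    """MSR on a single-channel float image: sum_i w_i * (log I - log(G_sigma_i * I))."""
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    if weights is None:
        weights = np.full(len(sigmas), 1.0 / len(sigmas))
    out = np.zeros_like(img)
    for w, s in zip(weights, sigmas):
        out += w * (np.log(img) - np.log(gaussian_filter(img, sigma=s)))
    return out

def histogram_truncation(img, low_pct=1.0, high_pct=99.0):
    """Clip the MSR output at the given percentiles and remap to [0, 255]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    img = np.clip(img, lo, hi)
    return ((img - lo) / (hi - lo + 1e-12) * 255.0).astype(np.uint8)

# Toy usage on a synthetic low-contrast image standing in for a hazy aerial frame
rng = np.random.default_rng(0)
hazy = rng.uniform(100, 140, size=(256, 256))
enhanced = histogram_truncation(multi_scale_retinex(hazy))
print(enhanced.min(), enhanced.max())
```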

  4. Time-resolved perfusion imaging at the angiography suite: preclinical comparison of a new flat-detector application to computed tomography perfusion.

    PubMed

    Jürgens, Julian H W; Schulz, Nadine; Wybranski, Christian; Seidensticker, Max; Streit, Sebastian; Brauner, Jan; Wohlgemuth, Walter A; Deuerling-Zheng, Yu; Ricke, Jens; Dudeck, Oliver

    2015-02-01

    The objective of this study was to compare the parameter maps of a new flat-panel detector application for time-resolved perfusion imaging in the angiography room (FD-CTP) with computed tomography perfusion (CTP) in an experimental tumor model. Twenty-four VX2 tumors were implanted into the hind legs of 12 rabbits. Three weeks later, FD-CTP (Artis zeego; Siemens) and CTP (SOMATOM Definition AS +; Siemens) were performed. The parameter maps for the FD-CTP were calculated using prototype software, and those for the CTP were calculated with VPCT-body software on a dedicated syngo MultiModality Workplace. The parameters were compared using the Pearson product-moment correlation coefficient and linear regression analysis. The Pearson product-moment correlation coefficient showed good correlation for both intratumoral blood volume (0.848, P < 0.01) and blood flow (0.698, P < 0.01). The linear regression analysis of perfusion between FD-CTP and CTP yielded a regression equation of y = 4.44x + 36.72 (P < 0.01) for blood volume and y = 0.75x + 14.61 (P < 0.01) for blood flow. This preclinical study provides evidence that FD-CTP allows time-resolved (dynamic) perfusion imaging of tumors similar to CTP, which provides the basis for clinical applications such as the assessment of tumor response to locoregional therapies directly in the angiography suite.

  5. Development of a fusion approach selection tool

    NASA Astrophysics Data System (ADS)

    Pohl, C.; Zeng, Y.

    2015-06-01

    During the last decades, the number and quality of remote sensing satellite sensors available for Earth observation have grown significantly. The amount of available multi-sensor imagery, along with its increased spatial and spectral resolution, presents new challenges to Earth scientists. With a Fusion Approach Selection Tool (FAST), the remote sensing community would obtain access to an optimized and improved image processing technology. Remote sensing image fusion is a means to produce images containing information that is not inherent in any single image alone. The user currently has access to sophisticated commercial image fusion techniques, plus the option to tune the parameters of each individual technique to match the anticipated application. This leaves the operator with an uncountable number of options for combining remote sensing images, not to mention the selection of the appropriate images, resolutions and bands. Image fusion can be a computationally intensive and time-consuming endeavour. In addition, it requires knowledge about remote sensing, image fusion, digital image processing and the application. FAST shall provide the user with a quick overview of processing flows to choose from to reach the target. FAST will ask for the available images, application parameters and desired information, and will process this input to produce a workflow that quickly obtains the best results. It will optimize data and image fusion techniques. It provides an overview of the possible results from which the user can choose the best. FAST will enable even inexperienced users to apply advanced processing methods and maximize the benefit of multi-sensor image exploitation.

  6. Long-Term RST Analysis of Anomalous TIR Sequences in Relation with Earthquakes Occurred in Greece in the Period 2004-2013

    NASA Astrophysics Data System (ADS)

    Eleftheriou, Alexander; Filizzola, Carolina; Genzano, Nicola; Lacava, Teodosio; Lisi, Mariano; Paciello, Rossana; Pergola, Nicola; Vallianatos, Filippos; Tramutoli, Valerio

    2016-01-01

    Real-time integration of multi-parametric observations is expected to accelerate the process toward improved, and operationally more effective, systems for time-Dependent Assessment of Seismic Hazard (t-DASH) and earthquake short-term (from days to weeks) forecasting. However, a very preliminary step in this direction is the identification of those parameters (chemical, physical, biological, etc.) whose anomalous variations can be, to some extent, associated with the complex process of preparation for major earthquakes. In this paper one of these parameters (the Earth's emitted radiation in the Thermal InfraRed spectral region) is considered for its possible correlation with M ≥ 4 earthquakes that occurred in Greece between 2004 and 2013. The Robust Satellite Technique (RST) data analysis approach and Robust Estimator of TIR Anomalies (RETIRA) index were used to preliminarily define, and then to identify, significant sequences of TIR anomalies (SSTAs) in 10 years (2004-2013) of daily TIR images acquired by the Spinning Enhanced Visible and Infrared Imager on board the Meteosat Second Generation satellite. Taking into account the physical models proposed for justifying the existence of a correlation among TIR anomalies and earthquake occurrences, specific validation rules (in line with the ones used by the Collaboratory for the Study of Earthquake Predictability—CSEP—Project) have been defined to drive a retrospective correlation analysis process. The analysis shows that more than 93% of all identified SSTAs occur in the prefixed space-time window around the time and location of occurrence of (M ≥ 4) earthquakes, with a false positive rate smaller than 7%. Molchan error diagram analysis shows that such a correlation is far from achievable by chance, notwithstanding the large number of missed events due to frequent space/time data gaps produced by the presence of clouds over the scene. The achieved results, and particularly the very low rate of false positives registered over such a long testing period, seem already sufficient (at least) to qualify TIR anomalies (identified by the RST approach and RETIRA index) among the parameters to be considered in the framework of a multi-parametric approach to t-DASH.

  7. Multi-contrast MRI registration of carotid arteries based on cross-sectional images and lumen boundaries

    NASA Astrophysics Data System (ADS)

    Wu, Yu-Xia; Zhang, Xi; Xu, Xiao-Pan; Liu, Yang; Zhang, Guo-Peng; Li, Bao-Juan; Chen, Hui-Jun; Lu, Hong-Bing

    2017-02-01

    Ischemic stroke is strongly correlated with carotid atherosclerosis and is mostly caused by vulnerable plaques. Analyzing the components of plaques is therefore particularly important for the detection of vulnerable plaques. Recently, plaque analysis based on multi-contrast magnetic resonance imaging has attracted great attention. Though multi-contrast MR imaging has potential for enhanced depiction of the carotid wall, its performance is hampered by the misalignment of different imaging sequences. In this study, a coarse-to-fine registration strategy based on cross-sectional images and wall boundaries is proposed to solve the problem. It includes two steps: a rigid step using the iterative closest point algorithm to register the centerlines of the carotid artery extracted from multi-contrast MR images, and a non-rigid step using the thin plate spline to register the lumen boundaries of the carotid artery. In the rigid step, the centerline was extracted by tracking the cross-sectional images along the vessel direction calculated from the Hessian matrix. In the non-rigid step, a shape context descriptor is introduced to find corresponding points of two similar boundaries. In addition, the deterministic annealing technique is used to find a globally optimized solution. The proposed strategy was evaluated on newly developed three-dimensional, fast and high-resolution multi-contrast black-blood MR imaging. Quantitative validation indicated that after registration, the overlap of the two boundaries from different sequences is 95%, and their mean surface distance is 0.12 mm. In conclusion, the proposed algorithm effectively improves the accuracy of registration for further component analysis of carotid plaques.
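
    To make the rigid step concrete, the sketch below shows a bare-bones iterative closest point alignment of two 3-D centerline point sets using SciPy and an SVD-based rigid transform update; it is a generic ICP illustration under the assumption of reasonably overlapping centerlines, not the authors' implementation, and the non-rigid thin-plate-spline step is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(moving, fixed, n_iter=30):
    """Align the 'moving' centerline points to the 'fixed' centerline points."""
    tree = cKDTree(fixed)
    current = moving.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)       # closest fixed point for each moving point
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
    return current

# Toy usage: a rotated, shifted helix standing in for the centerline from a second sequence
theta = np.linspace(0, 4 * np.pi, 200)
fixed = np.c_[np.cos(theta), np.sin(theta), 0.1 * theta]
a = 0.2
Rz = np.array([[np.cos(a), -np.sin(a), 0.0], [np.sin(a), np.cos(a), 0.0], [0.0, 0.0, 1.0]])
moving = fixed @ Rz.T + np.array([0.3, -0.2, 0.1])
aligned = icp(moving, fixed)
print(np.abs(aligned - fixed).mean())      # mean residual misalignment after ICP
```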

  8. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporal and spatial calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and decouples the temporal or spatial resolution from the SNR. The new algorithm was evaluated for both spatial and temporal analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
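
    The spatial speckle contrast itself (K = local standard deviation divided by local mean over a small sliding window) can be computed without explicit loops using separable box filters, which is one common way of reducing the numerical cost; the sketch below is a generic NumPy/SciPy illustration, not the multi-core/many-core implementation evaluated in the record, and the window size is a placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_speckle_contrast(raw, window=7):
    """K = local std / local mean, computed with two box filters instead of a sliding loop."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)   # clip tiny negatives from round-off
    return np.sqrt(var) / (mean + 1e-12)

# Toy usage on a synthetic speckle-like frame (fully developed speckle gives K near 1)
rng = np.random.default_rng(0)
frame = rng.exponential(scale=100.0, size=(512, 512))
print(spatial_speckle_contrast(frame).mean())
```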

  9. Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes

    NASA Astrophysics Data System (ADS)

    Teppati Losè, L.; Chiabrando, F.; Spanò, A.

    2018-05-01

    The research presented in this paper focuses on a preliminary evaluation of a 360 multi-camera rig: the possibilities of using the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated, and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, from both an operational and a theoretical point of view. The consistency of the six cameras that compose the 360 system was analysed in depth by adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was designed and created, and several topographic measurements were performed in order to obtain a set of control points to enhance and control the photogrammetric process. The influence of the interior orientation parameters of the six cameras was analysed both in the different phases of the photogrammetric workflow (reprojection errors on single tie points, dense cloud generation, geometrical description of the surveyed object, etc.) and in the stitching of the different images into a single spherical panorama (some considerations on the influence of the camera parameters on the overall quality of the spherical image are also reported in this section).

  10. Multi-focus image fusion with the all convolutional neural network

    NASA Astrophysics Data System (ADS)

    Du, Chao-ben; Gao, She-sheng

    2018-01-01

    A decision map contains complete and clear information about the image to be fused, which is crucial to various image fusion issues, especially multi-focus image fusion. However, obtaining a decision map that yields a satisfactory fusion result is necessary and usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to obtain a state-of-the-art decision map. The main idea is that the max-pooling of the CNN is replaced by a convolution layer, the residuals are propagated backwards by gradient descent, and the training parameters of the individual layers of the CNN are updated layer by layer. Based on this, we propose a new all-CNN (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and can lead to high-quality fusion results. Experimental results clearly validate that the proposed algorithm obtains state-of-the-art fusion performance in terms of both qualitative and quantitative evaluations.

  11. Monitoring dynamic reactions of red blood cells to UHF electromagnetic waves radiation using a novel micro-imaging technology.

    PubMed

    Ruan, Ping; Yong, Junguang; Shen, Hongtao; Zheng, Xianrong

    2012-12-01

    Multiple state-of-the-art techniques, such as multi-dimensional micro-imaging, fast multi-channel micro-spectrophotometry, and dynamic micro-imaging analysis, were used to dynamically investigate various effects on cells under 900 MHz electromagnetic radiation. Changes in cell shape, size, and parameters of the Hb absorption spectrum under electromagnetic wave radiation of different power densities are presented in this article. Experimental results indicated that isolated human red blood cells (RBCs) show no obvious real-time responses to ultra-low-density (15 μW/cm², 31 μW/cm²) electromagnetic wave radiation when the radiation time is not more than 30 min; however, the cells do have significant reactions in shape, size, and the like to electromagnetic wave radiation with power densities of 1 mW/cm² and 5 mW/cm². The data also reveal the possible influences of, and statistical relationships among, living human cell functions, radiation amount, and exposure time with high-frequency electromagnetic waves. The results of this study may be significant for the protection of human beings and other living organisms against possible radiation effects of high-frequency electromagnetic waves.

  12. Change Detection of High-Resolution Remote Sensing Images Based on Adaptive Fusion of Multiple Features

    NASA Astrophysics Data System (ADS)

    Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.

    2018-04-01

    Since traditional change detection algorithms mainly depend on the spectral information of image patches and fail to effectively mine and fuse multiple image features, this article borrows ideas from object-oriented analysis and proposes a remote sensing image change detection algorithm based on multi-feature fusion. First, image objects are obtained by multi-scale segmentation; then the color histogram and linear gradient (edge) histogram of each object are computed; next, the EMD statistical operator is used to measure the color distance and the edge-line feature distance of each object between the two periods, and an adaptive weighting method combines the color feature distance and the edge-line feature distance to construct the object heterogeneity. Finally, the change detection results for image patches are obtained by analyzing the heterogeneity histogram. The experimental results show that the method can fully fuse the color and edge-line features, thus improving the accuracy of change detection.
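
    A compact way to illustrate the per-object distance computation (not the authors' implementation) is to compare the colour histogram of the same segmented object at two dates with a 1-D Earth Mover's Distance and combine it with an edge-feature distance through a weighted sum; in the sketch below the histograms, edge distance, weights and decision threshold are all placeholders.

```python
import numpy as np
from scipy.stats import wasserstein_distance  # 1-D Earth Mover's Distance

def emd_histogram_distance(h1, h2, bin_centers):
    """EMD between two normalized histograms defined on the same bin centers."""
    return wasserstein_distance(bin_centers, bin_centers, u_weights=h1, v_weights=h2)

def object_heterogeneity(color_d, edge_d, w_color=0.6):
    """Weighted combination of colour and edge-line feature distances."""
    return w_color * color_d + (1.0 - w_color) * edge_d

# Toy example: colour histogram of one object at two dates (placeholder data)
bins = np.linspace(0, 255, 32)
h_t1 = np.exp(-0.5 * ((bins - 90) / 25.0) ** 2);  h_t1 /= h_t1.sum()
h_t2 = np.exp(-0.5 * ((bins - 140) / 25.0) ** 2); h_t2 /= h_t2.sum()

color_d = emd_histogram_distance(h_t1, h_t2, bins)
edge_d = 0.4                                        # placeholder edge-line feature distance
score = object_heterogeneity(color_d, edge_d)
print("changed" if score > 10.0 else "unchanged")   # threshold is a placeholder
```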

  13. An extraction algorithm of pulmonary fissures from multislice CT image

    NASA Astrophysics Data System (ADS)

    Tachibana, Hiroyuki; Saita, Shinsuke; Yasutomo, Motokatsu; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Sasagawa, Michizo; Eguchi, Kenji; Moriyama, Noriyuki

    2005-04-01

    Aging and smoking history increase the incidence of pulmonary emphysema. Restoring alveoli destroyed by pulmonary emphysema is difficult, so early detection is important. Multi-slice CT technology has been improving 3-D image analysis with higher body-axis resolution and shorter scan times, and low-dose, high-accuracy scanning has become available. Multi-slice CT images help physicians make accurate measurements, but the huge volume of image data requires considerable time and cost. This paper addresses computer-aided emphysema region analysis and demonstrates the effectiveness of the proposed algorithm.

  14. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine relevant information of multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, multi-focal images within a stack are fused along 3 orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects, we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image fusion based multilinear classifier can reach a higher classification rate (95.7%) than that of the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image fusion based multilinear approach shows great potential in building an automated nematode taxonomy system for nematologists. It is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
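
    For the feature-combination step, canonical correlation analysis between two feature sets extracted from fused images along different directions can be sketched with scikit-learn as follows; the feature matrices and number of components are placeholders, and this is not the authors' multilinear classifier.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_stacks = 200
feat_xy = rng.normal(size=(n_stacks, 64))   # placeholder features from stacks fused along one direction
feat_xz = rng.normal(size=(n_stacks, 64))   # placeholder features from stacks fused along another direction

# Project both views onto maximally correlated canonical components
cca = CCA(n_components=10)
u, v = cca.fit_transform(feat_xy, feat_xz)

# A common CCA-based fusion: concatenate the projected views for a downstream classifier
combined = np.hstack([u, v])
print(combined.shape)   # (200, 20) fused representation
```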

  15. Development and bench testing of a multi-spectral imaging technology built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Bolton, Frank J.; Weiser, Reuven; Kass, Alex J.; Rose, Donny; Safir, Amit; Levitz, David

    2016-03-01

    Cervical cancer screening presents a great challenge for clinicians across the developing world. In many countries, cervical cancer screening is done by visualization with the naked eye. Simple brightfield white light imaging with photo documentation has been shown to make a significant impact on cervical cancer care. Adoption of smartphone based cervical imaging devices is increasing across Africa. However, advanced imaging technologies such as multispectral imaging systems, are seldom deployed in low resource settings, where they are needed most. To address this challenge, the optical system of a smartphone-based mobile colposcopy imaging system was refined, integrating components required for low cost, portable multi-spectral imaging of the cervix. This paper describes the refinement of the mobile colposcope to enable it to acquire images of the cervix at multiple illumination wavelengths, including modeling and laboratory testing. Wavelengths were selected to enable quantifying the main absorbers in tissue (oxy- and deoxy-hemoglobin, and water), as well as scattering parameters that describe the size distribution of scatterers. The necessary hardware and software modifications are reviewed. Initial testing suggests the multi-spectral mobile device holds promise for use in low-resource settings.

  16. Feature-based registration of historical aerial images by Area Minimization

    NASA Astrophysics Data System (ADS)

    Nagarajan, Sudhagar; Schenk, Toni

    2016-06-01

    The registration of historical images plays a significant role in assessing changes in land topography over time. By comparing historical aerial images with recent data, geometric changes that have taken place over the years can be quantified. However, the lack of ground control information and precise camera parameters has limited scientists' ability to reliably incorporate historical images into change detection studies. Other limitations include the methods of determining identical points between recent and historical images, which has proven to be a cumbersome task due to continuous land cover changes. Our research demonstrates a method of registering historical images using Time Invariant Line (TIL) features. TIL features are different representations of the same line features in multi-temporal data without explicit point-to-point or straight line-to-straight line correspondence. We successfully determined the exterior orientation of historical images by minimizing the area formed between corresponding TIL features in recent and historical images. We then tested the feasibility of the approach with synthetic and real data and analyzed the results. Based on our analysis, this method shows promise for long-term 3D change detection studies.

  17. Comparison of fan beam, slit-slat and multi-pinhole collimators for molecular breast tomosynthesis.

    PubMed

    van Roosmalen, Jarno; Beekman, Freek J; Goorden, Marlies C

    2018-05-16

    Recently, we proposed and optimized dedicated multi-pinhole molecular breast tomosynthesis (MBT) that images a lightly compressed breast. As MBT may also be performed with other types of collimators, the aim of this paper is to optimize MBT with fan beam and slit-slat collimators and to compare its performance to that of multi-pinhole MBT to arrive at a truly optimized design. Using analytical expressions, we first optimized fan beam and slit-slat collimator parameters to reach maximum sensitivity at a series of given system resolutions. Additionally, we performed full system simulations of a breast phantom containing several tumours for the optimized designs. We found that at equal system resolution the maximum achievable sensitivity increases from pinhole to slit-slat to fan beam collimation with fan beam and slit-slat MBT having on average a 48% and 20% higher sensitivity than multi-pinhole MBT. Furthermore, by inspecting simulated images and applying a tumour-to-background contrast-to-noise (TB-CNR) analysis, we found that slit-slat collimators underperform with respect to the other collimator types. The fan beam collimators obtained a similar TB-CNR as the pinhole collimators, but the optimum was reached at different system resolutions. For fan beam collimators, a 6-8 mm system resolution was optimal in terms of TB-CNR, while with pinhole collimation highest TB-CNR was reached in the 7-10 mm range.

  18. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective.

    PubMed

    Pirpinia, Kleopatra; Bosman, Peter A N; Loo, Claudette E; Winter-Warnars, Gonneke; Janssen, Natasja N Y; Scholten, Astrid N; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-06-23

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.

  19. The feasibility of manual parameter tuning for deformable breast MR image registration from a multi-objective optimization perspective

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; E Loo, Claudette; Winter-Warnars, Gonneke; Y Janssen, Natasja N.; Scholten, Astrid N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2017-07-01

    Deformable image registration is typically formulated as an optimization problem involving a linearly weighted combination of terms that correspond to objectives of interest (e.g. similarity, deformation magnitude). The weights, along with multiple other parameters, need to be manually tuned for each application, a task currently addressed mainly via trial-and-error approaches. Such approaches can only be successful if there is a sensible interplay between parameters, objectives, and desired registration outcome. This, however, is not well established. To study this interplay, we use multi-objective optimization, where multiple solutions exist that represent the optimal trade-offs between the objectives, forming a so-called Pareto front. Here, we focus on weight tuning. To study the space a user has to navigate during manual weight tuning, we randomly sample multiple linear combinations. To understand how these combinations relate to desirability of registration outcome, we associate with each outcome a mean target registration error (TRE) based on expert-defined anatomical landmarks. Further, we employ a multi-objective evolutionary algorithm that optimizes the weight combinations, yielding a Pareto front of solutions, which can be directly navigated by the user. To study how the complexity of manual weight tuning changes depending on the registration problem, we consider an easy problem, prone-to-prone breast MR image registration, and a hard problem, prone-to-supine breast MR image registration. Lastly, we investigate how guidance information as an additional objective influences the prone-to-supine registration outcome. Results show that the interplay between weights, objectives, and registration outcome makes manual weight tuning feasible for the prone-to-prone problem, but very challenging for the harder prone-to-supine problem. Here, patient-specific, multi-objective weight optimization is needed, obtaining a mean TRE of 13.6 mm without guidance information reduced to 7.3 mm with guidance information, but also providing a Pareto front that exhibits an intuitively sensible interplay between weights, objectives, and registration outcome, allowing outcome selection.
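
    To make the multi-objective terminology above concrete, the short sketch below samples random weight combinations for a two-term objective and extracts the non-dominated (Pareto-optimal) outcomes; the objective values are synthetic placeholders, and this is not the registration code or the evolutionary algorithm used in the study.

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows (all objectives to be minimized)."""
    n = costs.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        if mask[i]:
            # a row dominates i if it is <= everywhere and strictly < somewhere
            dominates_i = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
            if dominates_i.any():
                mask[i] = False
    return mask

# Placeholder outcomes: (similarity cost, deformation magnitude) for random weight settings
rng = np.random.default_rng(0)
outcomes = rng.random((500, 2))
front = outcomes[pareto_front(outcomes)]
print(front.shape)   # the trade-off solutions a user could navigate
```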

  20. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  1. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment, where the measurement distances are several times larger than indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  2. Rice crop growth monitoring using ENVISAT-1/ASAR AP mode

    NASA Astrophysics Data System (ADS)

    Konishi, Tomohisa; Suga, Yuzo; Omatu, Shigeru; Takeuchi, Shoji; Asonuma, Kazuyoshi

    2007-10-01

    Hiroshima Institute of Technology (HIT) operates direct down-links of microwave and optical Earth observation satellite data in Japan. This study focuses on the validation of rice crop monitoring using microwave remotely sensed image data acquired by ENVISAT-1, with reference to ground truth data such as rice crop height, vegetation cover rate and leaf area index in test sites of the Hiroshima district, in the western part of Japan. ENVISAT-1/ASAR data have the capability to monitor the rice crop growing cycle by using alternating cross polarization mode images. However, ASAR data are influenced by several parameters such as land cover structure and the direction and alignment of rice crop fields in the test sites. In this study, the validation was carried out by combining microwave image data with ground truth data on rice crop fields to investigate the above parameters. Multi-temporal, multi-direction (descending and ascending) and multi-angle ASAR alternating cross polarization mode images were used to investigate the rice crop growing cycle. In addition, LANDSAT-7/ETM+ data were used to detect the land cover structure and the direction and alignment of rice crop fields corresponding to the ASAR backscatter. Finally, the extraction of the rice planted area was attempted by using multi-temporal ASAR AP mode data such as VV/VH and HH/HV. As a result of this study, it is clear that the estimated rice planted area coincides with existing statistical data for the area of rice crop fields. In addition, HH/HV is more effective than VV/VH in rice planted area extraction.

  3. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
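
    The principle of local PCA shrinkage (not the authors' exact overcomplete filter) can be sketched on a toy multi-channel image: around each location, the pixels of a small block across all channels form a matrix, the least significant principal components are suppressed, and the block is reconstructed; in the sketch below the image, block size and number of retained components are placeholders, and the overlapping-patch aggregation of the original method is simplified to non-overlapping blocks.

```python
import numpy as np

def local_pca_denoise(vol, patch=4, n_keep=1):
    """Block-wise PCA shrinkage of a multi-channel 2-D image of shape (H, W, C).

    Each non-overlapping (patch x patch) block is reshaped into a matrix whose rows
    are pixels and whose columns are channels; only the n_keep most significant
    principal components are retained before reconstructing the block.
    """
    H, W, C = vol.shape
    out = vol.astype(np.float64).copy()
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = out[i:i + patch, j:j + patch, :].reshape(-1, C)
            mean = block.mean(axis=0, keepdims=True)
            U, s, Vt = np.linalg.svd(block - mean, full_matrices=False)
            s[n_keep:] = 0.0                              # shrink the weak components
            out[i:i + patch, j:j + patch, :] = ((U * s) @ Vt + mean).reshape(patch, patch, C)
    return out

# Toy usage: eight noisy "diffusion channels" that share the same underlying structure
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64)[:, None, None], (1, 64, 8))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
# mean absolute error before vs. after shrinkage
print(np.abs(noisy - clean).mean(), np.abs(local_pca_denoise(noisy) - clean).mean())
```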

  4. Dem Reconstruction Using Light Field and Bidirectional Reflectance Function from Multi-View High Resolution Spatial Images

    NASA Astrophysics Data System (ADS)

    de Vieilleville, F.; Ristorcelli, T.; Delvit, J.-M.

    2016-06-01

    This paper presents a method for dense DSM reconstruction from a high-resolution, mono-sensor, passive, spaceborne panchromatic image sequence. The interest of our approach is four-fold. Firstly, we extend the core of light field approaches using an explicit BRDF model from the Image Synthesis community which is more realistic than the Lambertian model. The chosen model is the Cook-Torrance BRDF, which enables us to model rough surfaces with specular effects using specific material parameters. Secondly, we extend light field approaches for non-pinhole sensors and non-rectilinear motion by using a proper geometric transformation on the image sequence. Thirdly, we produce a 3D volume cost embodying all the tested possible heights and filter it using simple methods such as Volume Cost Filtering or variational optimal methods. We have tested our method on a Pleiades image sequence at various locations with dense urban buildings and report encouraging results with respect to classic multi-label methods such as MIC-MAC, or more recent pipelines such as S2P. Last but not least, our method also produces maps of material parameters on the estimated points, allowing us to simplify building classification or road extraction.
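
    For reference, the Cook-Torrance specular term referred to above combines a microfacet distribution D, a Fresnel term F and a geometric attenuation G as f = D·F·G / (4 (n·l)(n·v)); the sketch below evaluates it for a single surface point with a Beckmann distribution and Schlick's Fresnel approximation. All material parameters and direction vectors are placeholders, and this is not the paper's inversion code.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def cook_torrance(n, l, v, roughness=0.3, f0=0.04):
    """Specular reflectance for unit normal n, light direction l and view direction v."""
    h = normalize(l + v)                        # half vector
    nl, nv = max(n @ l, 1e-6), max(n @ v, 1e-6)
    nh, vh = max(n @ h, 1e-6), max(v @ h, 1e-6)

    # Beckmann microfacet distribution D
    m2 = roughness ** 2
    d = np.exp((nh ** 2 - 1.0) / (m2 * nh ** 2)) / (np.pi * m2 * nh ** 4)

    # Schlick approximation of the Fresnel term F
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5

    # Cook-Torrance geometric attenuation (shadowing/masking) G
    g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)

    return d * f * g / (4.0 * nl * nv)

n = np.array([0.0, 0.0, 1.0])
l = normalize(np.array([0.3, 0.0, 1.0]))
v = normalize(np.array([-0.2, 0.1, 1.0]))
print(cook_torrance(n, l, v))
```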

  5. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
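
    As a minimal illustration of the GLUE idea mentioned above (not tied to HYMOD or the study's setup), the sketch below samples the parameter of a toy linear-reservoir model, scores each sample with a Nash-Sutcliffe likelihood, keeps the behavioural samples above a threshold, and derives prediction limits from the behavioural ensemble; every numeric choice is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_reservoir(k, rain):
    """Toy rainfall-runoff model: S <- S + rain - k*S, discharge q = k*S (dt = 1)."""
    s, q = 0.0, np.empty_like(rain)
    for i, r in enumerate(rain):
        s += r - k * s
        q[i] = k * s
    return q

rain = rng.exponential(2.0, size=200)
obs = linear_reservoir(0.3, rain) + rng.normal(0.0, 0.2, size=200)   # synthetic "observations"

# GLUE: Monte Carlo sampling of the parameter, informal Nash-Sutcliffe likelihood,
# and a behavioural threshold separating acceptable from rejected parameter sets
samples = rng.uniform(0.05, 0.9, size=2000)
sims = np.array([linear_reservoir(k, rain) for k in samples])
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioural = nse > 0.7                                              # placeholder threshold

# Prediction limits from the behavioural ensemble (a full GLUE analysis would
# weight these quantiles by the likelihood values)
lower = np.percentile(sims[behavioural], 5, axis=0)
upper = np.percentile(sims[behavioural], 95, axis=0)
coverage = np.mean((obs >= lower) & (obs <= upper))
print(behavioural.sum(), "behavioural samples; observation coverage:", round(coverage, 2))
```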

  6. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is combined to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
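
    A small sketch of single-scale non-negative sparse coding using scikit-learn's dictionary learning (not the authors' multi-scale pipeline with Fisher discriminative analysis); the patch data, dictionary size, sparsity setting and pooling choice are placeholders.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
patches = np.abs(rng.normal(size=(500, 64)))    # placeholder: 500 flattened 8x8 image patches

# Learn a dictionary whose sparse codes are constrained to be non-negative
dico = DictionaryLearning(
    n_components=32,
    transform_algorithm="lasso_lars",
    transform_alpha=0.5,
    positive_code=True,        # non-negative sparse codes
    max_iter=20,
    random_state=0,
)
codes = dico.fit_transform(patches)             # (500, 32) non-negative sparse representation

# Pool the codes into a per-image feature vector (here a simple max pooling)
feature = codes.max(axis=0)
print((codes >= 0).all(), feature.shape)
```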

  7. Synthesis multi-projector content for multi-projector three dimension display using a layered representation

    NASA Astrophysics Data System (ADS)

    Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua

    2014-11-01

    Multi-projector three-dimensional display is a promising multi-view, glasses-free three-dimensional (3D) display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images of the projector array while avoiding the pseudoscopic problem. This paper first analyses the display characteristics of a multi-projector 3D display and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and has the advantage of reducing storage usage. Experimental results show that our method solves the pseudoscopic problem.

  8. REFLECTION OF PROPAGATING SLOW MAGNETO-ACOUSTIC WAVES IN HOT CORONAL LOOPS: MULTI-INSTRUMENT OBSERVATIONS AND NUMERICAL MODELING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandal, Sudip; Banerjee, Dipankar; Pant, Vaibhav

    Slow MHD waves are important tools for understanding coronal structures and dynamics. In this paper, we report a number of observations, from the X-Ray Telescope (XRT) on board Hinode and the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA), of reflecting longitudinal waves in hot coronal loops. To our knowledge, this is the first report of this kind as seen from the XRT and simultaneously with the AIA. The wave appears after a micro-flare occurs at one of the footpoints. We estimate the density and temperature of the loop plasma by performing differential emission measure (DEM) analysis on the AIA image sequence. The estimated speed of propagation is comparable to or lower than the local sound speed, suggesting it to be a propagating slow wave. The intensity perturbation amplitude, in every case, falls very rapidly as the perturbation moves along the loop and eventually vanishes after one or more reflections. To check the consistency of such reflection signatures with the obtained loop parameters, we perform a 2.5D MHD simulation, which uses the parameters obtained from our observation as inputs, and perform forward modeling to synthesize AIA 94 Å images. Analyzing the synthesized images, we obtain the same properties of the observables as for the real observation. From the analysis we conclude that footpoint heating can generate a slow wave which then reflects back and forth in the coronal loop before fading. Our analysis of the simulated data shows that the main agent for this damping is anisotropic thermal conduction.

  9. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use the location and orientation of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA applied progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a candidate alignment transform. Testing results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
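
    The following sketch shows only the fitness computation that such a search needs: scoring a candidate affine transform between two edge-point sets with the symmetric Hausdorff distance via scipy. The point sets and transform here are synthetic assumptions; a GA (or any optimiser) would minimise alignment_cost over the six affine parameters.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def apply_affine(points, params):
    """params = (a11, a12, a21, a22, tx, ty): 2x2 linear part plus a translation."""
    a11, a12, a21, a22, tx, ty = params
    return points @ np.array([[a11, a12], [a21, a22]]).T + np.array([tx, ty])

def alignment_cost(params, ref_pts, query_pts):
    """Symmetric Hausdorff distance after mapping the reference edges onto the query."""
    moved = apply_affine(ref_pts, params)
    return max(directed_hausdorff(moved, query_pts)[0],
               directed_hausdorff(query_pts, moved)[0])

# Synthetic edge-point sets: a reference set and a rotated/translated copy of it.
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 100.0, size=(300, 2))
theta = np.deg2rad(12.0)
true_params = (np.cos(theta), -np.sin(theta), np.sin(theta), np.cos(theta), 5.0, -3.0)
query = apply_affine(ref, true_params)

# A GA would search for the params that drive this cost toward zero.
print("cost at the identity transform:", round(alignment_cost((1, 0, 0, 1, 0, 0), ref, query), 3))
print("cost at the true transform    :", round(alignment_cost(true_params, ref, query), 3))
```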

  10. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated with the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera at different poses.

  11. Deep Learning with Hierarchical Convolutional Factor Analysis

    PubMed Central

    Chen, Bo; Polatkan, Gungor; Sapiro, Guillermo; Blei, David; Dunson, David; Carin, Lawrence

    2013-01-01

    Unsupervised multi-layered (“deep”) models are considered for general data, with a particular focus on imagery. The model is represented using a hierarchical convolutional factor-analysis construction, with sparse factor loadings and scores. The computation of layer-dependent model parameters is implemented within a Bayesian setting, employing a Gibbs sampler and variational Bayesian (VB) analysis that explicitly exploit the convolutional nature of the expansion. To address large-scale and streaming data, an online version of VB is also developed. The number of basis functions or dictionary elements at each layer is inferred from the data, based on a beta-Bernoulli implementation of the Indian buffet process. Example results are presented for several image-processing applications, with comparisons to related models in the literature. PMID:23787342

  12. Structural scene analysis and content-based image retrieval applied to bone age assessment

    NASA Astrophysics Data System (ADS)

    Fischer, Benedikt; Brosig, André; Deserno, Thomas M.; Ott, Bastian; Günther, Rolf W.

    2009-02-01

    Radiological bone age assessment is based on global or local image regions of interest (ROIs), such as the epiphyseal regions or the area of the carpal bones. Usually, these regions are compared to a standardized reference, and a score determining skeletal maturity is calculated. For computer-assisted diagnosis, automatic ROI extraction has so far been done by heuristic approaches. In this work, we apply a high-level scene-analysis approach for knowledge-based ROI segmentation. Based on a set of 100 reference images from the IRMA database, a so-called structural prototype (SP) is trained. In this graph-based structure, the 14 phalanges and 5 metacarpal bones are represented by nodes, with associated location, shape, and texture parameters modeled by Gaussians. Accordingly, Gaussians describing the relative positions, relative orientations, and other relative parameters between two nodes are associated with the edges. Thereafter, segmentation of a hand radiograph proceeds in several steps: (i) a multi-scale region merging scheme is applied to extract visually prominent regions; (ii) graph/sub-graph matching to the SP robustly identifies a subset of the 19 bones; (iii) the SP is registered to the current image for complete scene reconstruction; and (iv) the epiphyseal regions are extracted from the reconstructed scene. The evaluation is based on 137 images of Caucasian males from the USC hand atlas. Overall, an error rate of 32% is achieved; for the 6 middle distal and medial/distal epiphyses, 23% of all extractions need adjustment. On average, 9.58 of the 14 epiphyseal regions were extracted successfully per image. This is promising for further use in content-based image retrieval (CBIR) and CBIR-based automatic bone age assessment.

  13. HoloMonitor M4: holographic imaging cytometer for real-time kinetic label-free live-cell analysis of adherent cells

    NASA Astrophysics Data System (ADS)

    Sebesta, Mikael; Egelberg, Peter J.; Langberg, Anders; Lindskov, Jens-Henrik; Alm, Kersti; Janicke, Birgit

    2016-03-01

    Live-cell imaging enables the study of dynamic cellular processes that cannot be visualized in fixed-cell assays. An increasing number of scientists in academia and the pharmaceutical industry are choosing live-cell analysis over, or in addition to, traditional fixed-cell assays. We have developed a time-lapse, label-free imaging cytometer, HoloMonitor M4. HoloMonitor M4 helps researchers overcome inherent disadvantages of fluorescent analysis, specifically the effects of chemical labels or genetic modifications, which can alter cellular behavior. Additionally, label-free analysis is simple and eliminates the costs associated with staining procedures. The underlying technology is based on digital off-axis holography. While multiple alternatives exist for this type of analysis, we prioritized our development to achieve the following: a) an all-inclusive system - hardware and sophisticated cytometric analysis software; b) ease of use, enabling utilization of the instrumentation by expert- and entry-level researchers alike; c) validated quantitative assay end-points tracked over time, such as optical path length shift, optical volume and multiple derived imaging parameters; d) reliable digital autofocus; e) robust long-term operation in the incubator environment; f) high throughput and walk-away capability; and finally g) data management suitable for single- and multi-user networks. We provide examples of HoloMonitor applications in label-free cell viability measurements and monitoring of cell cycle phase distribution.

  14. Optimization of medical imaging display systems: using the channelized Hotelling observer for detecting lung nodules: experimental study

    NASA Astrophysics Data System (ADS)

    Platisa, Ljiljana; Vansteenkiste, Ewout; Goossens, Bart; Marchessoux, Cédric; Kimpe, Tom; Philips, Wilfried

    2009-02-01

    Medical-imaging systems are designed to aid medical specialists in a specific task. Therefore, the physical parameters of a system need to be optimized for the task performance of a human observer. This requires measurements of human performance on a given task during system optimization. Typically, psychophysical studies are conducted for this purpose. Numerical observer models have been successfully used to predict human performance in several detection tasks. In particular, the task of signal detection using a channelized Hotelling observer (CHO) in simulated images has been widely explored. However, there are few studies on clinically acquired images that also contain anatomic noise. In this paper, we investigate the performance of a CHO in the task of detecting lung nodules in real radiographic images of the chest. To evaluate the variability introduced by the limited available data, we employ a commonly used multi-reader multi-case (MRMC) study design, which accounts for both case and reader variability. Finally, we use the "one-shot" method to estimate the MRMC variance of the area under the ROC curve (AUC). The obtained AUC compares well to those reported for a human observer study on a similar data set. Furthermore, the "one-shot" analysis implies a fairly consistent performance of the CHO, with the variance of the AUC below 0.002. This indicates promising potential for numerical observers in the optimization of medical imaging displays and encourages further investigation of the subject.
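
    A minimal numpy sketch of a channelized Hotelling observer is given below, run on synthetic signal-present/absent images with difference-of-Gaussians channels. The channel profiles, noise model, and signal are illustrative assumptions rather than the chest-radiograph data of the study, and the AUC is estimated with a simple Mann-Whitney statistic rather than the one-shot MRMC method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32                                            # image size (N x N)
yy, xx = np.mgrid[:N, :N] - N // 2

# 1) Channels: a small bank of radially symmetric difference-of-Gaussians profiles.
def dog_channel(s1, s2):
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2.0 * s**2))
    c = g(s1) - g(s2)
    return (c / np.linalg.norm(c)).ravel()

U = np.stack([dog_channel(s, 1.67 * s) for s in (1.5, 2.5, 4.0, 6.5)], axis=1)

# 2) Synthetic data: a Gaussian-blob signal added to smooth, correlated background noise.
signal = 2.0 * np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2)).ravel()

def correlated_noise():
    w = np.fft.fft2(rng.normal(size=(N, N)))
    f = np.sqrt(np.fft.fftfreq(N)[:, None]**2 + np.fft.fftfreq(N)[None, :]**2)
    return np.real(np.fft.ifft2(w * np.exp(-(f / 0.08)**2))).ravel() * 20.0

absent = np.stack([correlated_noise() for _ in range(400)])
present = absent + signal                          # signal-known-exactly paradigm

# 3) Train the CHO on half of the data, then score the held-out half.
v_a, v_p = absent[:200] @ U, present[:200] @ U
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))
w_cho = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))   # Hotelling template in channel space

t_a, t_p = absent[200:] @ U @ w_cho, present[200:] @ U @ w_cho
auc = np.mean(t_p[:, None] > t_a[None, :])               # Mann-Whitney estimate of AUC
print("CHO AUC on held-out images:", round(float(auc), 3))
```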

  15. Image processing and machine learning for fully automated probabilistic evaluation of medical images.

    PubMed

    Sajn, Luka; Kukar, Matjaž

    2011-12-01

    The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since the evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. Further feature construction with principal component analysis on these features elevates the results to an even higher accuracy level, which represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  16. Generating Mosaics of Astronomical Images

    NASA Technical Reports Server (NTRS)

    Bergou, Attila; Berriman, Bruce; Good, John; Jacob, Joseph; Katz, Daniel; Laity, Anastasia; Prince, Thomas; Williams, Roy

    2005-01-01

    "Montage" is the name of a service of the National Virtual Observatory (NVO), and of software being developed to implement the service via the World Wide Web. Montage generates science-grade custom mosaics of astronomical images on demand from input files that comply with the Flexible Image Transport System (FITS) standard and contain image data registered on projections that comply with the World Coordinate System (WCS) standards. "Science-grade" in this context signifies that terrestrial and instrumental features are removed from images in a way that can be described quantitatively. "Custom" refers to user-specified parameters of projection, coordinates, size, rotation, and spatial sampling. The greatest value of Montage is expected to lie in its ability to analyze images at multiple wavelengths, delivering them on a common projection, coordinate system, and spatial sampling, and thereby enabling further analysis as though they were part of a single, multi-wavelength image. Montage will be deployed as a computation-intensive service through existing astronomy portals and other Web sites. It will be integrated into the emerging NVO architecture and will be executed on the TeraGrid. The Montage software will also be portable and publicly available.

  17. Multi-Stage System for Automatic Target Recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Lu, Thomas T.; Ye, David; Edens, Weston; Johnson, Oliver

    2010-01-01

    A multi-stage automated target recognition (ATR) system has been designed to perform computer vision tasks with adequate proficiency in mimicking human vision. The system is able to detect, identify, and track targets of interest. Potential regions of interest (ROIs) are first identified by the detection stage using an Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter combined with a wavelet transform. False positives are then eliminated by the verification stage using feature extraction methods in conjunction with neural networks. Feature extraction transforms the ROIs using filtering and binning algorithms to create feature vectors. A feedforward back-propagation neural network (NN) is then trained to classify each feature vector and to remove false positives. A system parameter optimization process has been developed to adapt to various targets and datasets. The objective was to design an efficient computer vision system that can learn to detect multiple targets in large images with unknown backgrounds. Because the target size is small relative to the image size in this problem, there are many regions of the image that could potentially contain the target. A cursory analysis of every region can be computationally efficient but may yield too many false positives, while a detailed analysis of every region can yield better results but may be computationally inefficient. The multi-stage ATR system was designed to achieve an optimal balance between accuracy and computational efficiency by incorporating both models. The detection stage first identifies potential ROIs where the target may be present by performing a fast Fourier-domain OT-MACH filter-based correlation. Because the threshold for this stage is chosen with the goal of detecting all true positives, a number of false positives are also detected as ROIs. The verification stage then transforms the regions of interest into feature space and eliminates false positives using an artificial neural network classifier. The multi-stage design allows the detection sensitivity and the identification specificity to be tuned individually in each stage, making it easier to achieve optimized ATR operation for a specific goal. Test results show that the system substantially reduced the false positive rate when tested on sonar and video image datasets.
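
    To illustrate just the detection stage, the sketch below nominates ROIs with a plain FFT-domain matched-filter correlation and a global threshold; this stands in for the OT-MACH filter/wavelet combination, and the neural-network verification stage is only indicated in a comment. The scene, target, and threshold are synthetic assumptions.

```python
import numpy as np

def correlate_fft(image, template):
    """Cross-correlate an image with a template via the frequency domain."""
    H = np.fft.fft2(template, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(H)))

rng = np.random.default_rng(0)
scene = rng.normal(0.0, 1.0, size=(256, 256))

# Plant a small Gaussian-blob "target" at a known location in the cluttered scene.
yy, xx = np.mgrid[:9, :9] - 4
target = 3.0 * np.exp(-(xx**2 + yy**2) / 4.0)
scene[100:109, 180:189] += target

# Detection stage: correlate, normalise, and threshold to nominate candidate ROIs.
corr = correlate_fft(scene, target)
score = (corr - corr.mean()) / corr.std()
rois = np.argwhere(score > 5.0)
print("candidate ROI pixels (row, col):")
print(rois)
# A verification stage (e.g. a neural-network classifier on features of each ROI)
# would then prune whatever false positives survive this threshold.
```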

  18. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  19. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. The proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are conducted on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of the proposed method.

  20. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Multispectral, Fluorescent and Photoplethysmographic Imaging for Remote Skin Assessment

    PubMed Central

    Spigulis, Janis

    2017-01-01

    Optical tissue imaging has several advantages over routine clinical imaging methods, including non-invasiveness (it does not change the structure of tissues), remote operation (it avoids infections) and the ability to quantify the tissue condition by means of specific image parameters. Dermatologists and other skin experts need compact (preferably pocket-size), self-sustaining and easy-to-use imaging devices. The operational principles and designs of ten portable in-vivo skin imaging prototypes developed at the Biophotonics Laboratory of the Institute of Atomic Physics and Spectroscopy, University of Latvia, over the past five years are presented in this paper. Four groups of imaging devices are considered. Multi-spectral imagers offer possibilities for distant mapping of specific skin parameters, thus facilitating better diagnostics of skin malformations. Autofluorescence intensity and photobleaching rate imagers show promising potential for skin tumor identification and margin delineation. Photoplethysmography video-imagers ensure remote detection of cutaneous blood pulsations and can provide real-time information on cardiovascular parameters and anesthesia efficiency. Multimodal skin imagers perform several of the abovementioned functions by taking a number of spectral and video images with the same image sensor. Design details of the developed prototypes and results of clinical tests illustrating their functionality are presented and discussed. PMID:28534815

  2. Multi Objective Optimization of Multi Wall Carbon Nanotube Based Nanogrinding Wheel Using Grey Relational and Regression Analysis

    NASA Astrophysics Data System (ADS)

    Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy

    2016-07-01

    A carbon nanotube mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel. Until now, no work has been carried out using a carbon nanotube based grinding wheel. A carbon nanotube based grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. In the present study, multi-response optimization of process parameters, namely surface roughness and metal removal rate, of the grinding process with single-wall carbon nanotube (CNT) mixed cutting fluids is undertaken using an orthogonal array with grey relational analysis. Experiments are performed under the designated grinding conditions obtained using the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. Using the analysis of variance approach, the significant machining parameters are found. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results are compared empirically for grinding with and without the CNT grinding wheel.
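
    A minimal numpy sketch of the grey relational analysis step is shown below for a hypothetical L9 experiment with two responses (surface roughness, smaller-the-better; metal removal rate, larger-the-better). The response values and the distinguishing coefficient of 0.5 are assumptions, not the paper's measurements.

```python
import numpy as np

# Hypothetical responses for 9 runs of an L9 array: [surface roughness Ra, metal removal rate].
responses = np.array([
    [0.82, 12.1], [0.74, 10.5], [0.91, 14.2],
    [0.66,  9.8], [0.59, 11.7], [0.88, 13.4],
    [0.71, 15.0], [0.95,  8.9], [0.63, 12.8],
])

# 1) Normalise each response to [0, 1]: smaller-the-better for Ra, larger-the-better for MRR.
ra, mrr = responses[:, 0], responses[:, 1]
norm = np.column_stack([
    (ra.max() - ra) / (ra.max() - ra.min()),
    (mrr - mrr.min()) / (mrr.max() - mrr.min()),
])

# 2) Grey relational coefficients against the ideal sequence (all ones).
delta = np.abs(1.0 - norm)
zeta = 0.5                                  # distinguishing coefficient (usual default)
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade = mean coefficient per run; higher is better overall.
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3))
print("best run (0-indexed):", int(np.argmax(grade)))
```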

  3. Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.

    PubMed

    Han, Youkyung; Oh, Jaehong

    2018-05-17

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.

  4. The image evaluation of iterative motion correction reconstruction algorithm PROPELLER T2-weighted imaging compared with MultiVane T2-weighted imaging

    NASA Astrophysics Data System (ADS)

    Lee, Suk-Jun; Yu, Seung-Man

    2017-08-01

    The purpose of this study was to evaluate the usefulness and clinical applications of MultiVaneXD, a T2-weighted imaging technique that applies an iterative motion correction reconstruction algorithm, compared with MultiVane images acquired with a 3T MRI. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system, based on clinical and laboratory findings, underwent upper abdominal MRI acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated motion artifacts and vessel conspicuity, judged from the sharpness of the portal vein, hepatic vein, and upper abdominal organs. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The interclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in the motion artifact evaluation. Furthermore, MultiVane scored better than MultiVaneXD in abdominal organ sharpness and vessel conspicuity, but the difference was not significant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than MultiVane (1.98), but the difference was not significant (p = 0.135). MultiVaneXD is a motion correction method that is more advanced than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.

  5. Flow cytometric HyPer-based assay for hydrogen peroxide.

    PubMed

    Lyublinskaya, O G; Antonov, S A; Gorokhovtsev, S G; Pugovkina, N A; Kornienko, Ju S; Ivanova, Ju S; Shatrova, A N; Aksenov, N D; Zenin, V V; Nikolsky, N N

    2018-05-30

    HyPer is a genetically encoded fluorogenic sensor for hydrogen peroxide that is generally used for ratiometric imaging of H2O2 fluxes in living cells. Here, we demonstrate the advantages of a HyPer-based ratiometric flow cytometry assay for H2O2, using K562 and human mesenchymal stem cell lines expressing HyPer. We show that flow cytometry analysis is suitable to detect the HyPer response to submicromolar concentrations of extracellularly added H2O2, much lower than the concentrations addressed previously in other HyPer-based assays (such as cell imaging or fluorimetry). The suggested technique is also much more sensitive to hydrogen peroxide than the widespread flow cytometry assay exploiting the H2O2-reactive dye H2DCFDA and, contrary to the H2DCFDA-based assay, can be employed for kinetic studies of H2O2 utilization by cells, including measurements of the rate constants of H2O2 removal. In addition, flow cytometry multi-parameter ratiometric measurements enable rapid and high-throughput detection of endogenously generated H2O2 in different subpopulations of HyPer-expressing cells. To sum up, HyPer can be used in multi-parameter flow cytometry studies as a highly sensitive indicator of intracellular H2O2. Copyright © 2018. Published by Elsevier Inc.

  6. Implementation and image processing of a multi-focusing bionic compound eye

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Guo, Yongcai; Luo, Jiasai

    2018-01-01

    In this paper, a new bionic compound eye (BCE) with a multi-focusing microlens array (MLA) is proposed. The BCE consists of a detachable micro-hole array (MHA), a multi-focusing MLA and a spherical substrate, allowing it to have a large field of view (FOV) without crosstalk or stray light. The MHA was fabricated by precision machining, and the parameters of each microlens vary with the aperture of its micro-hole, through which the multi-focusing MLA is realized under negative pressure. Without pattern transfer or substrate reshaping, the whole fabrication can be completed within several minutes using microinjection technology. Furthermore, the method is cost-effective and easy to operate, providing a feasible route to mass production of the BCE. Corresponding image processing is used to stitch the sub-images of the individual microlenses into an integral large-FOV image. The image stitching is implemented through the overlap between adjacent sub-images, and the feature points between adjacent sub-images are captured by Harris corner detection. Using adaptive non-maximal suppression, numerous potential mismatched points are eliminated and the efficiency of the algorithm is effectively improved. Following this, random sample consensus (RANSAC) is used for feature point matching, from which the projective transformation between the images is obtained. Accurate image matching is then achieved after a smooth transition by the weighted average method. Experimental results indicate that the image-stitching algorithm can be applied to the curved BCE with a large field of view.
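
    The sketch below reproduces the stitching idea with OpenCV on two synthetic overlapping sub-images. ORB features and brute-force matching are substituted for the Harris detection and adaptive non-maximal suppression described above, while RANSAC homography estimation and a simple averaging blend of the overlap follow the same outline; the synthetic texture and the 150-pixel offset are assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-ins for two adjacent microlens sub-images with ~50% overlap.
rng = np.random.default_rng(0)
texture = cv2.GaussianBlur((rng.random((300, 500)) * 255).astype(np.uint8), (5, 5), 0)
img1, img2 = texture[:, 0:300], texture[:, 150:450]   # right view shifted by 150 px

# 1) Detect and describe features (ORB used here instead of Harris corners + ANMS).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2) Match descriptors and keep the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# 3) RANSAC rejects mismatched points and estimates the projective transform img2 -> img1.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
print("estimated translation (expected ~[150, 0]):", np.round(H[:2, 2], 1))

# 4) Warp img2 into img1's frame and average the overlap (a crude weighted blend).
h, w = img1.shape
warped = cv2.warpPerspective(img2, H, (w + 200, h))
mosaic = warped.copy()
blend = ((mosaic[:, :w].astype(np.uint16) + img1) // 2).astype(np.uint8)
mosaic[:, :w] = np.where(mosaic[:, :w] > 0, blend, img1)
print("mosaic shape:", mosaic.shape, "RANSAC inliers:", int(inlier_mask.sum()))
```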

  7. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging inverse problem. We employ primal-dual theory and a fast solution of integral equations, and propose a new iterative imaging method. The regularization parameter is selected by generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented that proceeds from low to high frequency. We also discuss the initial-guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  8. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods, averaging and principal component analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.
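
    For reference, the sketch below implements the two pixel-level baselines that the study compares against, simple averaging and PCA-weighted fusion, on synthetic co-registered "visible" and "infrared" arrays; MSSF itself (feature-level segmentation fusion) is not reproduced here, and the random arrays are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic co-registered source bands (stand-ins for the visible and infrared images), in [0, 1].
visible, infrared = rng.random((128, 128)), rng.random((128, 128))

# Baseline 1: pixel-wise averaging.
fused_avg = 0.5 * (visible + infrared)

# Baseline 2: PCA fusion, weighting each band by the loadings of the first
# principal component of the joint pixel distribution.
stack = np.column_stack([visible.ravel(), infrared.ravel()])
cov = np.cov((stack - stack.mean(axis=0)).T)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = np.abs(eigvecs[:, np.argmax(eigvals)])
weights = pc1 / pc1.sum()
fused_pca = weights[0] * visible + weights[1] * infrared

print("PCA fusion weights (visible, infrared):", np.round(weights, 3))
print("averaged fusion range:", round(fused_avg.min(), 3), "-", round(fused_avg.max(), 3))
```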

  9. Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.

    1999-01-01

    This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.

  10. Generation of synthetic CT using multi-scale and dual-contrast patches for brain MRI-only external beam radiotherapy.

    PubMed

    Aouadi, Souha; Vasic, Ana; Paloor, Satheesh; Torfeh, Tarraf; McGarry, Maeve; Petric, Primoz; Riyas, Mohamed; Hammoud, Rabih; Al-Hammadi, Noora

    2017-10-01

    To create a synthetic CT (sCT) from conventional brain MRI using a patch-based method for MRI-only radiotherapy planning and verification. Conventional T1- and T2-weighted MRI and CT datasets from 13 patients who underwent brain radiotherapy were included in a retrospective study, whereas 6 patients were tested prospectively. A new contribution to the Non-local Means Patch-Based Method (NMPBM) framework was made with the use of novel multi-scale and dual-contrast patches. Furthermore, the training dataset was improved by pre-selecting the database patients closest to the target patient, balancing computation time and accuracy. sCT and derived DRRs were assessed visually and quantitatively. VMAT planning was performed on CT and sCT for hypothetical PTVs in homogeneous and heterogeneous regions. Dosimetric analysis was done by comparing Dose Volume Histogram (DVH) parameters of PTVs and organs at risk (OARs). The positional accuracy of MRI-only image-guided radiation therapy based on CBCT or kV images was evaluated. The retrospective (respectively prospective) evaluation of the proposed Multi-scale and Dual-contrast Patch-Based Method (MDPBM) gave a mean absolute error MAE = 99.69 ± 11.07 HU (98.95 ± 8.35 HU), and a Dice coefficient in bone DI_bone = 0.83 ± 0.03 (0.82 ± 0.03). Good agreement with conventional planning techniques was obtained; the highest percentage of DVH metric deviation was 0.43% (0.53%) for PTVs and 0.59% (0.75%) for OARs. The accuracy of sCT/CBCT and DRR_sCT/kV image registration parameters was <2 mm and <2°. Improvements with MDPBM, compared to NMPBM, were significant. We presented a novel method for sCT generation from T1- and T2-weighted MRI, potentially suitable for MRI-only external beam radiotherapy of brain sites. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. Application of Multi-Parameter Data Visualization by Means of Multidimensional Scaling to Evaluate Possibility of Coal Gasification

    NASA Astrophysics Data System (ADS)

    Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław

    2017-09-01

    The application of methods drawing upon multi-parameter visualization of data, by transformation of a multidimensional space into a two-dimensional one, allows multi-parameter data to be shown on a computer screen. Thanks to that, it is possible to conduct a qualitative analysis of the data in the most natural way for a human being, i.e., by sight. An example of such a multi-parameter visualization method is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from the Janina Mining Plant and the Wieczorek Coal Mine. We examined whether this method of multi-parameter data visualization allows the sample space to be divided into areas of varying suitability for the fluidal gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], in which the key, important and additional parameters affecting the gasification process are described.
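
    A minimal scikit-learn sketch of the multidimensional scaling step is given below; random seven-dimensional vectors stand in for the coal-sample parameters, and standardising the parameters before scaling is an added assumption rather than part of the paper's procedure.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder for the seven measured parameters of n coal samples.
samples = rng.normal(size=(60, 7))

# Standardise so that no single parameter dominates the pairwise distances.
X = StandardScaler().fit_transform(samples)

# Metric multidimensional scaling: embed the 7-D points in 2-D while preserving distances.
embedding = MDS(n_components=2, random_state=0).fit_transform(X)

print("2-D coordinates of the first three samples:")
print(np.round(embedding[:3], 3))
```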

  12. Radar velocity determination using direction of arrival measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.; Naething, Richard M.

    The various technologies presented herein relate to utilizing direction of arrival (DOA) data to determine various flight parameters for an aircraft. A plurality of radar images (e.g., SAR images) can be analyzed to identify a plurality of pixels in the radar images relating to one or more ground targets. In an embodiment, the plurality of pixels can be selected based upon the pixels exceeding a SNR threshold. The DOA data in conjunction with a measurable Doppler frequency for each pixel can be obtained. Multi-aperture technology enables derivation of an independent measure of DOA to each pixel based on interferometric analysis. This independent measure of DOA enables decoupling of the aircraft velocity from the DOA in a range-Doppler map, thereby enabling determination of a radar velocity. The determined aircraft velocity can be utilized to update an onboard INS and to keep it aligned, without the need for additional velocity-measuring instrumentation.

  13. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    The activated sludge system is generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). For these measurements, tests are conducted in the laboratory, which take many hours to give the final result. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge wastewater treatment. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image analysis based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. Hence, it is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.

  14. High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.

    PubMed

    Sereda, A; Moreau, J; Canva, M; Maillart, E

    2014-04-15

    Surface plasmon resonance (SPR) sensing has proven to be a valuable tool in the field of surface interaction characterization, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the two-dimensional imaging capability, therefore decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction in data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers, using only five wavelength measurements per point. In sum, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime, independently of variations in the buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
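
    As an illustration of a five-parameter spectral fit of this kind, the sketch below fits an assumed dip shape (linear baseline plus a Lorentzian-like plasmon dip) to five simulated wavelength samples with scipy; the functional form, wavelengths and parameter values are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def reflectivity(lam, r0, slope, depth, lam_c, width):
    """Assumed 5-parameter model: linear baseline plus a Lorentzian-shaped plasmon dip."""
    return r0 + slope * (lam - lam_c) - depth * width**2 / ((lam - lam_c)**2 + width**2)

# Five measurement wavelengths (nm), mimicking a five-wavelength multi-spectral acquisition.
lam = np.array([740.0, 765.0, 785.0, 805.0, 830.0])

rng = np.random.default_rng(0)
true = dict(r0=0.62, slope=5e-4, depth=0.45, lam_c=781.0, width=12.0)
measured = reflectivity(lam, **true) + rng.normal(0.0, 2e-3, size=lam.size)

p0 = [0.6, 0.0, 0.4, 775.0, 15.0]                  # rough initial guess
popt, _ = curve_fit(reflectivity, lam, measured, p0=p0, maxfev=5000)
print("recovered resonance wavelength (nm):", round(popt[3], 2))
print("true resonance wavelength (nm):     ", true["lam_c"])
```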

  15. Quadratic trigonometric B-spline for image interpolation using GA

    PubMed Central

    Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation. PMID:28640906

  16. Quadratic trigonometric B-spline for image interpolation using GA.

    PubMed

    Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah

    2017-01-01

    In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address problems related to two-dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques, the Genetic Algorithm (GA). The GA is used to optimize the control parameters in the description of the newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices, along with the traditional Peak Signal-to-Noise Ratio (PSNR), are employed as image quality metrics to analyze and compare the outcomes of the approach offered in this work with three existing digital image interpolation schemes. The results show that the proposed scheme is a better choice for dealing with the problems associated with image interpolation.
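
    The sketch below computes two of the traditional metrics named above, PSNR and SSIM, with scikit-image for a simple downscale/upscale round trip that stands in for the spline interpolation scheme; FSIM and MS-SSIM are not available in scikit-image and would need other packages.

```python
import numpy as np
from skimage import data, transform
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Reference image and a crude "interpolated" version: downscale by 2, then upscale back.
reference = data.camera().astype(np.float64) / 255.0
small = transform.rescale(reference, 0.5, anti_aliasing=True)
interpolated = transform.resize(small, reference.shape, order=3)   # cubic spline resize

psnr = peak_signal_noise_ratio(reference, interpolated, data_range=1.0)
ssim = structural_similarity(reference, interpolated, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```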

  17. Diffraction effects incorporated design of a parallax barrier for a high-density multi-view autostereoscopic 3D display.

    PubMed

    Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu

    2016-02-22

    We present the optical characteristics of the view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light propagation from the display panel pixels through the PB slits to the viewing zone is simulated numerically. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 offers maximized brightness of the view images, while that corresponding to a Fresnel number of 0.4 ∼ 0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of their relative magnitudes and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
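
    As a numerical illustration of how the Fresnel number constrains the slit design, the sketch below assumes the convention N_F = a^2 / (lambda * L), with a the slit half-width and L the barrier-to-panel gap; the wavelength, gap, and slit widths are illustrative values, not the paper's display parameters.

```python
# Fresnel number for a parallax-barrier slit, assuming N_F = a**2 / (wavelength * gap),
# with a the slit half-width and gap the barrier-to-panel distance (illustrative values only).
wavelength = 550e-9          # green light, m
gap = 3.0e-3                 # barrier-to-panel distance, m

for slit_width_um in (30, 40, 50, 60, 70, 80):
    a = 0.5 * slit_width_um * 1e-6
    n_f = a**2 / (wavelength * gap)
    print(f"slit width {slit_width_um:3d} um -> Fresnel number {n_f:.2f}")
```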

  18. Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.

    PubMed

    Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B

    2016-01-01

    We investigated the effects of low-dose multi-detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard dose) and with a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood-based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were determined biomechanically and correlated with the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR correlated significantly with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very fine detail that might be lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be achieved with the proposed methods.

  19. Automated connectionist-geostatistical classification as an approach to identify sea ice and land ice types, properties and provinces

    NASA Astrophysics Data System (ADS)

    Goetz-Weiss, L. R.; Herzfeld, U. C.; Trantow, T.; Hunke, E. C.; Maslanik, J. A.; Crocker, R. I.

    2016-12-01

    An important problem in model-data comparison is the identification of parameters that can be extracted from observational data as well as used in numerical models, which are typically based on idealized physical processes. Here, we present a suite of approaches to the characterization and classification of sea-ice and land-ice types, properties and provinces based on several types of remote-sensing data. Applications are given not only to illustrate the approach, but also to employ it in model evaluation and in understanding physical processes. (1) In a geostatistical characterization, spatial sea-ice properties in the Chukchi and Beaufort Seas and in Elson Lagoon are derived from analysis of RADARSAT and ERS-2 SAR data. (2) The analysis is taken further by utilizing multi-parameter feature vectors as inputs for unsupervised and supervised statistical classification, which facilitates classification of different sea-ice types. (3) Characteristic sea-ice parameters resulting from the classification can then be applied in model evaluation, as demonstrated for the ridging scheme of the Los Alamos sea ice model, CICE, using high-resolution altimeter and image data collected from unmanned aircraft over Fram Strait during the Characterization of Arctic Sea Ice Experiment (CASIE). The characteristic parameters chosen in this application are directly related to deformation processes, which also underlie the ridging scheme. (4) The method capable of the most complex classification tasks is the connectionist-geostatistical classification method. This approach has been developed to identify currently up to 18 different crevasse types in order to map the progression of the surge through the complex Bering-Bagley Glacier System, Alaska, in 2011-2014. The analysis utilizes airborne altimeter data, video image data and satellite image data. Results of the crevasse classification are compared to fracture modeling and found to match.

  20. Research on remote sensing identification of rural abandoned homesteads using multiparameter characteristics method

    NASA Astrophysics Data System (ADS)

    Xu, Saiping; Zhao, Qianjun; Yin, Kai; Cui, Bei; Zhang, Xiupeng

    2016-10-01

    The hollow village is a special phenomenon in the process of urbanization in China, which causes a waste of land resources. Therefore, it is urgent to carry out hollow village recognition and renovation. However, there is little research on the remote sensing identification of hollow villages. In this context, in order to recognize abandoned homesteads by remote sensing, the following experiment was carried out. First, the Gram-Schmidt transform method was used to fuse the multi-spectral and panchromatic WorldView-2 images. The fused images were then edge-enhanced by high-pass filtering, and multi-resolution segmentation and spectral difference segmentation were carried out to obtain the image objects. Second, spectral characteristic parameters were calculated, such as the normalized difference vegetation index (NDVI), the normalized difference water index (NDWI) and the normalized difference soil index (NDSI), and shape feature parameters were extracted, such as area, length/width ratio and rectangular fit. Third, the SEaTH algorithm was used to determine the thresholds and optimize the feature space. Furthermore, the threshold classification method and the random forest classifier were combined, and an appropriate number of samples was selected to train the classifier in order to determine the important feature parameters and the best classifier parameters involved in the classification. Finally, the classification results were verified by computing the confusion matrix. The classification results were spatially continuous, and the salt-and-pepper effect of pixel-based classification was effectively avoided. In addition, the results showed that the extracted abandoned homesteads had complete shapes and could be distinguished from confusable classes such as homesteads in use and roads.
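
    A small sketch of the spectral-index features is given below; the band arrays are synthetic placeholders for the co-registered WorldView-2 bands, and only NDVI and NDWI (whose standard normalized-difference definitions are well established) are computed, since NDSI follows the same pattern with the authors' choice of bands.

```python
import numpy as np

def normalized_difference(a, b, eps=1e-9):
    """Generic normalized-difference index (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

rng = np.random.default_rng(0)
# Synthetic stand-ins for co-registered WorldView-2 reflectance bands.
red, green, nir = (rng.random((100, 100)) for _ in range(3))

ndvi = normalized_difference(nir, red)     # vegetation: high over plants
ndwi = normalized_difference(green, nir)   # water: high over open water

# Example of the kind of threshold rule combined with the random forest in the paper.
vegetation_mask = ndvi > 0.3
print("NDVI range:", round(float(ndvi.min()), 3), "to", round(float(ndvi.max()), 3))
print("pixels flagged as vegetation:", int(vegetation_mask.sum()))
```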

  1. Hybrid Geometric Calibration Method for Multi-Platform Spaceborne SAR Image with Sparse Gcps

    NASA Astrophysics Data System (ADS)

    Lv, G.; Tang, X.; Ai, B.; Li, T.; Chen, Q.

    2018-04-01

    Geometric calibration can provide high-accuracy geometric coordinates for spaceborne SAR images by refining the geometric parameters of the Range-Doppler model with ground control points (GCPs). However, it is very difficult to obtain GCPs covering large-scale areas, especially in mountainous regions. In addition, the traditional calibration method is only used for single-platform SAR images and cannot support hybrid geometric calibration of multi-platform images. To solve these problems, a hybrid geometric calibration method for multi-platform spaceborne SAR images with sparse GCPs is proposed in this paper. First, we calibrate the master image, which contains GCPs. Second, a point tracking algorithm is used to obtain tie points (TPs) between the master and slave images. Finally, we calibrate the slave images using the TPs as GCPs. We take the Beijing-Tianjin-Hebei region as an example, studying the hybrid geometric calibration of SAR images using 3 TerraSAR-X images, 3 TanDEM-X images and 5 GF-3 images covering more than 235 kilometers in the north-south direction. Geometric calibration of all images is completed using only 5 GCPs. GPS data extracted from a GNSS receiver are used to assess the planimetric accuracy after calibration. The results show that, after geometric calibration with sparse GCPs, the geometric positioning accuracy is 3 m for TSX/TDX images and 7.5 m for GF-3 images.

  2. Mapping power-law rheology of living cells using multi-frequency force modulation atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Ryosuke; Okajima, Takaharu, E-mail: okajima@ist.hokudai.ac.jp

    We present multi-frequency force modulation atomic force microscopy (AFM) for mapping the complex shear modulus G* of living cells as a function of frequency over the range of 50–500 Hz in the same measurement time as a single-frequency force modulation measurement. The AFM technique enables us to reconstruct image maps of rheological parameters, which exhibit a frequency-dependent power-law behavior with respect to G*. These quantitative rheological measurements reveal a large spatial variation in G* in this frequency range for single cells. Moreover, we find that the reconstructed images of the power-law rheological parameters are quite different from those obtained in force-curve or single-frequency force modulation measurements. This indicates that the former provide information about intracellular mechanical structures of the cells that are usually not resolved with conventional force measurement methods.

  3. Multi-slice Fractional Ventilation Imaging in Large Animals with Hyperpolarized Gas MRI

    PubMed Central

    Emami, Kiarash; Xu, Yinan; Hamedani, Hooman; Xin, Yi; Profka, Harrilla; Rajaei, Jennia; Kadlecek, Stephen; Ishii, Masaru; Rizi, Rahim R.

    2012-01-01

    Noninvasive assessment of regional lung ventilation is of critical importance in quantifying the severity of disease and evaluating response to therapy in many pulmonary diseases. This work presents, for the first time, the implementation of a hyperpolarized (HP) gas MRI technique for measuring whole-lung regional fractional ventilation (r) in Yorkshire pigs (n = 5) in the supine position through the use of a gas mixing and delivery device. The proposed technique utilizes a series of back-to-back HP gas breaths, with images acquired during short end-inspiratory breath-holds. In order to decouple the RF pulse decay effect from the ventilatory signal build-up in the airways, the regional distribution of the flip angle (α) was estimated in the imaged slices by acquiring a series of back-to-back images with no inter-scan time delay during a breath-hold at the tail end of the ventilation sequence. An analysis was performed to assess the sensitivity of the multi-slice ventilation model to noise, oxygen and the number of flip-angle images. The optimal α value was determined by minimizing the error in the estimation of r; αopt = 5–6° for the set of acquisition parameters used in pigs. The mean r values for the group of pigs were 0.27±0.09, 0.35±0.06 and 0.40±0.04 for the ventral, middle and dorsal slices, respectively (excluding conductive airways, r > 0.9). A positive gravitational (ventral-dorsal) ventilation gradient was present in all animals. The trachea and major conductive airways showed a uniform near-unity r value, with progressively smaller values corresponding to smaller-diameter airways, ultimately leading to lung parenchyma. The results demonstrate the feasibility of measuring fractional ventilation in large species and provide a platform to address the technical challenges associated with long breathing time scales through the optimization of acquisition parameters in a species with a pulmonary physiology very similar to that of human beings. PMID:22290603

  4. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    PubMed Central

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin; Xiao, Xianghui; Vargas, Phillip A.; La Rivière, Patrick J.

    2015-01-01

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image-domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficients in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions. PMID:26422059
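    A minimal sketch of the data-domain decomposition idea is given below: with one effective attenuation coefficient per material per detector layer, the two log-attenuation measurements form a 2x2 linear system in the material path lengths. The coefficient and path-length values are invented for illustration.

```python
# Hedged sketch of a data-domain two-material decomposition with a dual-layer
# detector: each layer sees one effective attenuation coefficient per material, so
# the two log-attenuation measurements are inverted for the two path lengths.
# All coefficient and path-length values are invented for illustration.
import numpy as np

# rows: detector layer (front, back); columns: material (water-like, contrast agent)
M = np.array([[0.20, 4.0],    # effective mu [1/mm] seen by the front layer
              [0.15, 1.5]])   # effective mu [1/mm] seen by the back layer

true_lengths = np.array([30.0, 0.05])        # mm of each material along the ray
log_atten = M @ true_lengths                 # ideal measurements: -ln(I/I0) per layer

est_lengths = np.linalg.solve(M, log_atten)  # data-domain decomposition
print("estimated path lengths [mm]:", est_lengths)
```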

  5. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image-domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficients in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions.

  6. Impact of hypoxia and the metabolic microenvironment on radiotherapy of solid tumors. Introduction of a multi-institutional research project.

    PubMed

    Zips, Daniel; Adam, Markus; Flentje, Michael; Haase, Axel; Molls, Michael; Mueller-Klieser, Wolfgang; Petersen, Cordula; Philbrook, Christine; Schmitt, Peter; Thews, Oliver; Walenta, Stefan; Baumann, Michael

    2004-10-01

    Recent developments in imaging technology and tumor biology have led to new techniques to detect hypoxia and related alterations of the metabolic microenvironment in tumors. However, whether these new methods can predict radiobiological hypoxia and outcome after fractionated radiotherapy still awaits experimental evaluation. The present article will introduce a multi-institutional research project addressing the impact of hypoxia and the metabolic microenvironment on radiotherapy of solid tumors. The four laboratories involved are situated at the universities of Dresden, Mainz, Munich and Würzburg, Germany. The joint scientific project started to collect data obtained on a set of ten different human tumor xenografts growing in nude mice by applying various imaging techniques to detect tumor hypoxia and related parameters of the metabolic microenvironment. These techniques include magnetic resonance imaging and spectroscopy, metabolic mapping with quantitative bioluminescence and single-photon imaging, histological multiparameter analysis of biochemical hypoxia, perfusion and vasculature, and immunohistochemistry of factors related to angiogenesis, invasion and metastasis. To evaluate the different methods, baseline functional radiobiological data including radiobiological hypoxic fraction and outcome after fractionated irradiation will be determined. Besides increasing our understanding of tumor biology, the project will focus on new, clinically applicable strategies for microenvironment profiling and will help to identify those patients that might benefit from targeted interventions to improve tumor oxygenation.

  7. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    DOE PAGES

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin; ...

    2015-09-30

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image-domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficients in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions.

  8. Optimization of Brain T2 Mapping Using Standard CPMG Sequence In A Clinical Scanner

    NASA Astrophysics Data System (ADS)

    Hnilicová, P.; Bittšanský, M.; Dobrota, D.

    2014-04-01

    In magnetic resonance imaging, transverse relaxation time (T2) mapping is a useful quantitative tool enabling enhanced diagnostics of many brain pathologies. The aim of our study was to test the influence of different sequence parameters on the calculated T2 values, including multi-slice measurements, slice position, interslice gap, echo spacing, and pulse duration. Measurements were performed using a standard multi-slice multi-echo CPMG imaging sequence on a 1.5 Tesla routine whole-body MR scanner. We used multiple phantoms with different agarose concentrations (0 % to 4 %) and verified the results on a healthy volunteer. Neither the pulse duration, the size of the interslice gap, nor the slice shift had any impact on the measured T2. Measurement accuracy increased with shorter echo spacing. A standard multi-slice multi-echo CPMG protocol with the shortest echo spacing, the smallest available interslice gap (100 % of slice thickness) and a shorter pulse duration was found to be optimal and reliable for calculating T2 maps in the human brain.
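    The underlying T2 estimation step can be sketched as a per-voxel mono-exponential fit to the multi-echo signal, S(TE) = S0·exp(−TE/T2). The example below uses synthetic echo times and signal values (assumptions, not the study's data).

```python
# Minimal sketch: per-voxel mono-exponential T2 fit from a multi-echo CPMG series,
# S(TE) = S0 * exp(-TE / T2), done as a linear fit on log(S). Echo spacing, echo
# count and tissue values are synthetic assumptions.
import numpy as np

echo_spacing = 10e-3                     # s (shorter spacing improved accuracy above)
TE = echo_spacing * np.arange(1, 17)     # 16 echoes
S0_true, T2_true = 1000.0, 80e-3
S = S0_true * np.exp(-TE / T2_true)

A = np.column_stack([np.ones_like(TE), -TE])     # log(S) = log(S0) - TE/T2
coef, *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
S0_fit, T2_fit = np.exp(coef[0]), 1.0 / coef[1]
print(f"T2 ~ {T2_fit * 1e3:.1f} ms")
```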

  9. Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer

    NASA Technical Reports Server (NTRS)

    Broderick, Daniel

    2012-01-01

    This software retrieves the surface and atmosphere parameters from multi-angle, multiband spectra. The synthetic spectra are generated by applying the modified Rahman-Pinty-Verstraete Bidirectional Reflectance Distribution Function (BRDF) model and a single-scattering-dominated atmosphere model to surface reflectance data from the Multiangle Imaging SpectroRadiometer (MISR). The aerosol physical model uses a single-scattering approximation with Rayleigh-scattering molecules and Henyey-Greenstein aerosols. The surface and atmosphere parameters of the models are retrieved using the Levenberg-Marquardt algorithm. The software can retrieve the surface and atmosphere parameters at two different scales: the surface parameters are retrieved pixel-by-pixel, while the atmosphere parameters are retrieved for a group of pixels to which the same atmosphere model parameters are applied. This two-scale approach allows one to select the natural scale of the atmosphere properties relative to the surface properties. The software also takes advantage of an intelligent initial condition given by the solution of neighboring pixels.
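    A toy version of such a retrieval is sketched below: a Henyey-Greenstein aerosol term plus a Lambertian surface reflectance is fitted to multi-angle observations with SciPy's Levenberg-Marquardt solver. The forward model, angles and truth values are deliberate simplifications and assumptions, not the software's actual RPV-based model.

```python
# Illustrative toy retrieval: fit a Lambertian surface reflectance plus a
# Henyey-Greenstein single-scattering aerosol term to multi-angle reflectances with
# SciPy's Levenberg-Marquardt solver. The forward model, angles and truth values are
# assumptions, not the software's RPV-based model.
import numpy as np
from scipy.optimize import least_squares

def hg_phase(g, theta):
    """Henyey-Greenstein phase function at scattering angle theta (radians)."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(theta)) ** 1.5)

def forward(params, theta):
    surf, aer_amp, g = params
    return surf + aer_amp * hg_phase(g, theta)   # toy top-of-atmosphere reflectance

rng = np.random.default_rng(0)
angles = np.radians([26.1, 45.6, 60.0, 70.5, 110.0, 135.0])   # scattering angles
truth = (0.12, 0.30, 0.65)                                    # surface, amplitude, g
obs = forward(truth, angles) + 0.002 * rng.standard_normal(angles.size)

fit = least_squares(lambda p: forward(p, angles) - obs, x0=[0.1, 0.2, 0.5], method='lm')
print("retrieved (surface, aerosol amplitude, g):", np.round(fit.x, 3))
```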

  10. Automated analysis and classification of melanocytic tumor on skin whole slide images.

    PubMed

    Xu, Hongming; Lu, Cheng; Berendt, Richard; Jha, Naresh; Mandal, Mrinal

    2018-06-01

    This paper presents a computer-aided technique for automated analysis and classification of melanocytic tumor on skin whole slide biopsy images. The proposed technique consists of four main modules. First, skin epidermis and dermis regions are segmented by a multi-resolution framework. Next, epidermis analysis is performed, where a set of epidermis features reflecting nuclear morphologies and spatial distributions is computed. In parallel with epidermis analysis, dermis analysis is also performed, where dermal cell nuclei are segmented and a set of textural and cytological features are computed. Finally, the skin melanocytic image is classified into different categories such as melanoma, nevus or normal tissue by using a multi-class support vector machine (mSVM) with extracted epidermis and dermis features. Experimental results on 66 skin whole slide images indicate that the proposed technique achieves more than 95% classification accuracy, which suggests that the technique has the potential to be used for assisting pathologists on skin biopsy image analysis and classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
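    The final classification stage can be sketched with an off-the-shelf multi-class SVM on combined epidermis and dermis feature vectors, as below; the feature matrix and labels are random placeholders rather than data or features from the paper.

```python
# Hedged sketch: a multi-class SVM over concatenated epidermis + dermis features,
# mirroring the mSVM step described above. The feature matrix and labels are random
# placeholders, not features or data from the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(66, 20))        # 66 slides x 20 epidermis/dermis features
y = rng.integers(0, 3, size=66)      # 0 = normal, 1 = nevus, 2 = melanoma (dummy)

clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=1.0))
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```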

  11. High-speed Particle Image Velocimetry Near Surfaces

    PubMed Central

    Lu, Louise; Sick, Volker

    2013-01-01

    Multi-dimensional and transient flows play a key role in many areas of science, engineering, and health sciences but are often not well understood. The complex nature of these flows may be studied using particle image velocimetry (PIV), a laser-based imaging technique for optically accessible flows. Though many forms of PIV exist that extend the technique beyond the original planar two-component velocity measurement capabilities, the basic PIV system consists of a light source (laser), a camera, tracer particles, and analysis algorithms. The imaging and recording parameters, the light source, and the algorithms are adjusted to optimize the recording for the flow of interest and obtain valid velocity data. Common PIV investigations measure two-component velocities in a plane at a few frames per second. However, recent developments in instrumentation have facilitated high-frame-rate (>1 kHz) measurements capable of resolving transient flows with high temporal resolution. These high-frame-rate measurements have enabled investigations of the evolution of the structure and dynamics of highly transient flows, which play a critical role in understanding the fundamental physics of complex flows. A detailed description for performing high-resolution, high-speed planar PIV to study a transient flow near the surface of a flat plate is presented here. Details for adjusting the parameter constraints such as image and recording properties, the laser sheet properties, and processing algorithms to adapt PIV for any flow of interest are included. PMID:23851899
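    The core displacement estimate in PIV comes from cross-correlating interrogation windows of the two exposures. The sketch below does this with an FFT on a synthetic, periodically shifted window (an idealizing assumption); real processing adds windowing, sub-pixel peak fitting and outlier validation.

```python
# Minimal PIV sketch: estimate a uniform particle displacement between two
# interrogation windows from the peak of their FFT-based cross-correlation. The
# synthetic, periodically shifted "particle images" are an idealizing assumption.
import numpy as np

rng = np.random.default_rng(1)
win = 64
frame_a = rng.random((win, win))
true_shift = (3, -5)                                   # (rows, cols)
frame_b = np.roll(frame_a, true_shift, axis=(0, 1))    # second exposure

corr = np.fft.ifft2(np.fft.fft2(frame_a).conj() * np.fft.fft2(frame_b)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
disp = [p if p <= win // 2 else p - win for p in peak]  # unwrap cyclic peak location
print("estimated displacement (rows, cols):", disp)
```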

  12. Immunomagnetic cell separation, imaging, and analysis using Captivate ferrofluids

    NASA Astrophysics Data System (ADS)

    Jones, Laurie; Beechem, Joseph M.

    2002-05-01

    We have developed applications of Captivate™ ferrofluids, paramagnetic particles (approximately 200 nm diameter), for isolating and analyzing cell populations in combination with fluorescence-based techniques. Using a microscope-mounted magnetic yoke and sample insertion chamber, fluorescent images of magnetically captured cells were obtained in culture media, buffer, or whole blood, while non-magnetically labeled cells sedimented to the bottom of the chamber. We combined this immunomagnetic cell separation and imaging technique with fluorescent staining, spectroscopy, and analysis to evaluate cell surface receptor-containing subpopulations, live/dead cell ratios, apoptotic/dead cell ratios, etc. The acquired images were analyzed using multi-color parameters, as produced by nucleic acid staining, esterase activity, or antibody labeling. In addition, the immunomagnetically separated cell fractions were assessed through microplate analysis using the CyQUANT Cell Proliferation Assay. These methods should provide an inexpensive alternative to some flow cytometric measurements. The binding capacities of the streptavidin-labeled Captivate ferrofluid (SA-FF) particles were determined to be 8.8 nmol biotin/mg SA-FF, using biotin-4-fluorescein, and > 10^6 cells/mg SA-FF, using several cell types labeled with biotinylated probes. For goat anti-mouse IgG-labeled ferrofluids (GAM-FF), binding capacities were established to be approximately 0.2-7.5 nmol protein/mg GAM-FF using fluorescent conjugates of antibodies, protein G, and protein A.

  13. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background: A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods: We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results: Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are in better agreement with clinical measures than measures from single view reconstructions. Conclusions: Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284

  14. A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.

    PubMed

    Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M

    2011-01-20

    A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are in better agreement with clinical measures than measures from single view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.

  15. Multi-center prediction of hemorrhagic transformation in acute ischemic stroke using permeability imaging features.

    PubMed

    Scalzo, Fabien; Alger, Jeffry R; Hu, Xiao; Saver, Jeffrey L; Dani, Krishna A; Muir, Keith W; Demchuk, Andrew M; Coutts, Shelagh B; Luby, Marie; Warach, Steven; Liebeskind, David S

    2013-07-01

    Permeability images derived from magnetic resonance (MR) perfusion images are sensitive to blood-brain barrier derangement of the brain tissue and have been shown to correlate with subsequent development of hemorrhagic transformation (HT) in acute ischemic stroke. This paper presents a multi-center retrospective study that evaluates the power of six permeability MRI measures to predict HT: contrast slope (CS), final contrast (FC), maximum peak bolus concentration (MPB), peak bolus area (PB), relative recirculation (rR), and percentage recovery (%R). Dynamic T2*-weighted perfusion MR images were collected from 263 acute ischemic stroke patients from four medical centers. An essential aspect of this study is to exploit a classifier-based framework to automatically identify predictive patterns in the overall intensity distribution of the permeability maps. The model is based on normalized intensity histograms that are used as input features to the predictive model. Linear and nonlinear predictive models are evaluated using cross-validation to measure generalization power on new patients, and a comparative analysis is provided for the different types of parameters. Results demonstrate that perfusion imaging in acute ischemic stroke can predict HT with an average accuracy of more than 85% using a predictive model based on nonlinear regression. Results also indicate that the permeability feature based on the percentage of recovery performs significantly better than the other features. This novel model may be used to refine treatment decisions in acute stroke. Copyright © 2013 Elsevier Inc. All rights reserved.
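    The histogram-as-feature idea can be sketched as follows: each permeability map is reduced to a normalized intensity histogram and fed to a cross-validated classifier. The data, bin settings and the (linear) classifier below are placeholders; the study's best-performing model was nonlinear.

```python
# Sketch under assumptions: reduce each permeability map to a normalized intensity
# histogram and cross-validate a simple classifier for HT. The maps, labels, bin
# settings and the linear classifier are placeholders; the study's best model was
# a nonlinear regression model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def histogram_features(perm_map, bins=32, value_range=(0.0, 100.0)):
    hist, _ = np.histogram(perm_map, bins=bins, range=value_range)
    return hist / hist.sum()                   # normalized intensity histogram

maps = [rng.gamma(shape=2.0, scale=10.0, size=(64, 64)) for _ in range(40)]
X = np.array([histogram_features(m) for m in maps])
y = rng.integers(0, 2, size=len(maps))         # 1 = HT developed (placeholder labels)

clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```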

  16. Comparison of fan beam, slit-slat and multi-pinhole collimators for molecular breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    van Roosmalen, Jarno; Beekman, Freek J.; Goorden, Marlies C.

    2018-05-01

    Recently, we proposed and optimized dedicated multi-pinhole molecular breast tomosynthesis (MBT) that images a lightly compressed breast. As MBT may also be performed with other types of collimators, the aim of this paper is to optimize MBT with fan beam and slit-slat collimators and to compare its performance to that of multi-pinhole MBT to arrive at a truly optimized design. Using analytical expressions, we first optimized fan beam and slit-slat collimator parameters to reach maximum sensitivity at a series of given system resolutions. Additionally, we performed full system simulations of a breast phantom containing several tumours for the optimized designs. We found that at equal system resolution the maximum achievable sensitivity increases from pinhole to slit-slat to fan beam collimation with fan beam and slit-slat MBT having on average a 48% and 20% higher sensitivity than multi-pinhole MBT. Furthermore, by inspecting simulated images and applying a tumour-to-background contrast-to-noise (TB-CNR) analysis, we found that slit-slat collimators underperform with respect to the other collimator types. The fan beam collimators obtained a similar TB-CNR as the pinhole collimators, but the optimum was reached at different system resolutions. For fan beam collimators, a 6–8 mm system resolution was optimal in terms of TB-CNR, while with pinhole collimation highest TB-CNR was reached in the 7–10 mm range.

  17. In Situ Microphysical and Scattering Properties of Falling Snow in GPM-GCPEx

    NASA Astrophysics Data System (ADS)

    Duffy, G.; Nesbitt, S. W.; McFarquhar, G. M.; Poellot, M.; Chandrasekar, C. V.; Hudak, D. R.

    2013-12-01

    The Global Precipitation Measurement Cold-season Precipitation Experiment (GPM-GCPEx) field campaign was conducted near Egbert, Ontario, Canada in January-February 2012 to study the physical characteristics and microwave radiative properties of the column of hydrometeors in cold-season precipitation events. Extensive in situ aircraft profiling was conducted with the University of North Dakota (UND) Citation aircraft within the volume of several remote sensing instruments in a wide variety of precipitation events, from snow to freezing drizzle. Among the primary goals of GCPEx is improving our understanding of the microphysical characteristics of falling snow and how those characteristics relate to its multi-wavelength radiative characteristics. In this study, particle size distribution parameters, effective particle densities, and habit distributions are determined using in-situ cloud measurements obtained on the UND Citation using the High Volume Precipitation Spectrometer, the Cloud Particle Imager, and the Cloud Imaging Probe. These quantities are matched and compared to multi-frequency radar measurements from the Environment Canada King City C-Band and NASA D3R Ku-Ka Band dual-polarization radars. These analysis composites provide the basis for direct evaluation of particle size distributions against observed multi-wavelength and multi-polarization radar observations (including radar reflectivity, differential reflectivity, and dual wavelength ratio) in falling snow at weather radar and GPM radar frequencies. Theoretical predictions from Mie, Rayleigh-Gans, and more complex snowflake aggregate scattering models using the observed particle size distributions are compared with the observed radar scattering characteristics along the Citation flight track.

  18. Multi-institutional validation of a novel textural analysis tool for preoperative stratification of suspected thyroid tumors on diffusion-weighted MRI.

    PubMed

    Brown, Anna M; Nagala, Sidhartha; McLean, Mary A; Lu, Yonggang; Scoffings, Daniel; Apte, Aditya; Gonen, Mithat; Stambuk, Hilda E; Shaha, Ashok R; Tuttle, R Michael; Deasy, Joseph O; Priest, Andrew N; Jani, Piyush; Shukla-Dave, Amita; Griffiths, John

    2016-04-01

    Ultrasound-guided fine needle aspirate cytology fails to diagnose many malignant thyroid nodules; consequently, patients may undergo diagnostic lobectomy. This study assessed whether textural analysis (TA) could noninvasively stratify thyroid nodules accurately using diffusion-weighted MRI (DW-MRI). This multi-institutional study examined 3T DW-MRI images obtained with spin echo echo planar imaging sequences. The training data set included 26 patients from Cambridge, United Kingdom, and the test data set included 18 thyroid cancer patients from Memorial Sloan Kettering Cancer Center (New York, New York, USA). Apparent diffusion coefficients (ADCs) were compared over regions of interest (ROIs) defined on thyroid nodules. TA, linear discriminant analysis (LDA), and feature reduction were performed using the 21 MaZda-generated texture parameters that best distinguished benign and malignant ROIs. Training data set mean ADC values were significantly different for benign and malignant nodules (P = 0.02) with a sensitivity and specificity of 70% and 63%, respectively, and a receiver operator characteristic (ROC) area under the curve (AUC) of 0.73. The LDA model of the top 21 textural features correctly classified 89/94 DW-MRI ROIs with 92% sensitivity, 96% specificity, and an AUC of 0.97. This algorithm correctly classified 16/18 (89%) patients in the independently obtained test set of thyroid DW-MRI scans. TA classifies thyroid nodules with high sensitivity and specificity on multi-institutional DW-MRI data sets. This method requires further validation in a larger prospective study. © 2015 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine.

  19. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme.

    PubMed

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-04-21

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.
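    One of the quantities whose accuracy is analyzed above, the star centroid, can be sketched as a simple intensity-weighted mean over a small window. The Gaussian spot model, noise level and window size below are assumptions for illustration.

```python
# Hedged sketch: intensity-weighted centroiding of a star spot in a small window,
# the quantity whose accuracy the exposure-parameter analysis above concerns.
# The Gaussian spot model, noise level and window size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
size, sigma = 15, 1.2
true_xy = (7.3, 6.6)                             # sub-pixel star position
yy, xx = np.mgrid[0:size, 0:size]
spot = np.exp(-((xx - true_xy[0])**2 + (yy - true_xy[1])**2) / (2 * sigma**2))
spot += 0.01 * rng.standard_normal((size, size)) # assumed read-noise level

w = np.clip(spot - spot.mean(), 0.0, None)       # crude background suppression
cx = (w * xx).sum() / w.sum()
cy = (w * yy).sum() / w.sum()
print(f"centroid ~ ({cx:.2f}, {cy:.2f}), truth = {true_xy}")
```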

  20. Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme

    PubMed Central

    Wang, Hao; Jiang, Jie; Zhang, Guangjun

    2017-01-01

    The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve the said problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratorial and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. PMID:28430132

  1. Cloud-based processing of multi-spectral imaging data

    NASA Astrophysics Data System (ADS)

    Bernat, Amir S.; Bolton, Frank J.; Weiser, Reuven; Levitz, David

    2017-03-01

    Multispectral imaging holds great promise as a non-contact tool for the assessment of tissue composition. Performing multi-spectral imaging on a hand-held mobile device would bring this technology, and with it knowledge, to low-resource settings to provide state-of-the-art classification of tissue health. This modality, however, produces considerably larger data sets than white-light imaging and requires preliminary image analysis before it can be used. The data then need to be analyzed and logged without demanding too much of the system resources, computation time, or battery of the end-point device. Cloud environments were designed to address these problems by allowing end-point devices (smartphones) to offload computationally hard tasks. To this end, we present a method in which a hand-held device built around a smartphone captures a multi-spectral dataset in a movie file format (mp4), and we compare this format to other image formats in size, noise and correctness. We present the cloud configuration used for segmenting the movies into frames that can later be used for further analysis.
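    A minimal sketch of the first step, splitting an mp4 acquisition into frames before cloud upload or analysis, is given below; it assumes OpenCV is available and that a file named capture.mp4 exists, both of which are placeholders.

```python
# Minimal sketch (assumes OpenCV is installed and a placeholder file named
# "capture.mp4" exists): split a multi-spectral acquisition stored as an mp4 movie
# into individual frames before further analysis or cloud upload.
import cv2

cap = cv2.VideoCapture("capture.mp4")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)        # one frame per spectral band / time point
cap.release()
print(f"extracted {len(frames)} frames")
```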

  2. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    PubMed

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information of cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamic performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters associated with cardiovascular abnormalities. With the advancement of computer technology that facilitates high-speed computational fluid dynamics, the realization of a support diagnostic platform for hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high-fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and the hemodynamic flow patterns within them, such as the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  3. Object-based image analysis for cadastral mapping using satellite images

    NASA Astrophysics Data System (ADS)

    Kohli, D.; Crommelinck, S.; Bennett, R.; Koeva, M.; Lemmen, C.

    2017-10-01

    Cadasters together with land registry form a core ingredient of any land administration system. Cadastral maps comprise the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor-, cost- and time-intensive; alternative approaches are thus being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for (semi-)automation of cadastral boundary detection. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.

  4. CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.

    PubMed

    Bray, Mark-Anthony; Carpenter, Anne E

    2015-11-04

    Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.

  5. [RS estimation of inventory parameters and carbon storage of moso bamboo forest based on synergistic use of object-based image analysis and decision tree].

    PubMed

    Du, Hua Qiang; Sun, Xiao Yan; Han, Ning; Mao, Fang Jie

    2017-10-01

    By synergistically using object-based image analysis (OBIA) and classification and regression tree (CART) methods, the distribution, the inventory indexes (diameter at breast height, tree height, and crown closure), and the aboveground carbon storage (AGC) of moso bamboo forest in Shanchuan Town, Anji County, Zhejiang Province were investigated. The results showed that the moso bamboo forest could be accurately delineated by integrating the multi-scale image segmentation of the OBIA technique with CART, which connected the image objects at various scales, with a good producer's accuracy of 89.1%. The indexes estimated by the regression tree models constructed from features extracted from the image objects reached moderate or better accuracy, with the crown closure model achieving the best estimation accuracy of 67.9%. The estimation accuracy for diameter at breast height and tree height was relatively low, which is consistent with the conclusion that estimating diameter at breast height and tree height using optical remote sensing cannot achieve satisfactory results. Estimation of AGC reached relatively high accuracy, exceeding 80% in high-value regions.

  6. Patient-specific models of cardiac biomechanics

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Adarsh; Villongco, Christopher T.; Chuang, Joyce; Frank, Lawrence R.; Nigam, Vishal; Belezzuoli, Ernest; Stark, Paul; Krummen, David E.; Narayan, Sanjiv; Omens, Jeffrey H.; McCulloch, Andrew D.; Kerckhoffs, Roy C. P.

    2013-07-01

    Patient-specific models of cardiac function have the potential to improve diagnosis and management of heart disease by integrating medical images with heterogeneous clinical measurements subject to constraints imposed by physical first principles and prior experimental knowledge. We describe new methods for creating three-dimensional patient-specific models of ventricular biomechanics in the failing heart. Three-dimensional bi-ventricular geometry is segmented from cardiac CT images at end-diastole from patients with heart failure. Human myofiber and sheet architecture is modeled using eigenvectors computed from diffusion tensor MR images from an isolated, fixed human organ-donor heart and transformed to the patient-specific geometric model using large deformation diffeomorphic mapping. Semi-automated methods were developed for optimizing the passive material properties while simultaneously computing the unloaded reference geometry of the ventricles for stress analysis. Material properties of active cardiac muscle contraction were optimized to match ventricular pressures measured by cardiac catheterization, and parameters of a lumped-parameter closed-loop model of the circulation were estimated with a circulatory adaptation algorithm making use of information derived from echocardiography. These components were then integrated to create a multi-scale model of the patient-specific heart. These methods were tested in five heart failure patients from the San Diego Veteran's Affairs Medical Center who gave informed consent. The simulation results showed good agreement with measured echocardiographic and global functional parameters such as ejection fraction and peak cavity pressures.

  7. Multi-Objective data analysis using Bayesian Inference for MagLIF experiments

    NASA Astrophysics Data System (ADS)

    Knapp, Patrick; Glinksy, Michael; Evans, Matthew; Gom, Matth; Han, Stephanie; Harding, Eric; Slutz, Steve; Hahn, Kelly; Harvey-Thompson, Adam; Geissel, Matthias; Ampleford, David; Jennings, Christopher; Schmit, Paul; Smith, Ian; Schwarz, Jens; Peterson, Kyle; Jones, Brent; Rochau, Gregory; Sinars, Daniel

    2017-10-01

    The MagLIF concept has recently demonstrated Gbar pressures and confinement of charged fusion products at stagnation. We present a new analysis methodology that allows for integration of multiple diagnostics including nuclear, x-ray imaging, and x-ray power to determine the temperature, pressure, liner areal density, and mix fraction. A simplified hot-spot model is used with a Bayesian inference network to determine the most probable model parameters that describe the observations while simultaneously revealing the principal uncertainties in the analysis. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  8. Stability analysis for a multi-camera photogrammetric system.

    PubMed

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-08-18

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown, where a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  9. Stability Analysis for a Multi-Camera Photogrammetric System

    PubMed Central

    Habib, Ayman; Detchev, Ivan; Kwak, Eunju

    2014-01-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways of coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experimental results are shown, where a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction. PMID:25196012

  10. Techniques in processing multi-frequency multi-polarization spaceborne SAR data

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Chang, C. Y.

    1991-01-01

    This paper presents the algorithm design of the SIR-C ground data processor, with emphasis on the unique elements involved in the production of registered multifrequency polarimetric data products. A quick-look processing algorithm used for the generation of low-resolution browse image products and estimation of echo signal parameters is also presented. Specifically, the discussion covers: (1) azimuth reference function generation to produce registered polarimetric imagery; (2) geometric rectification to accommodate cross-track and along-track Doppler drifts; (3) multilook filtering designed to generate output imagery with a uniform resolution; and (4) efficient coding to compress the polarimetric image data for distribution.

  11. Extra Solar Planet Science With a Non Redundant Mask

    NASA Astrophysics Data System (ADS)

    Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi

    2017-01-01

    To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high-contrast imaging. I simulated NIRISS data of stars with and without planets and ran these simulations through the code that measures interferometric image properties, in order to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.

  12. Multi-scale Functional and Molecular Photoacoustic Tomography

    PubMed Central

    Yao, Junjie; Xia, Jun; Wang, Lihong V.

    2015-01-01

    Photoacoustic tomography (PAT) combines rich optical absorption contrast with the high spatial resolution of ultrasound at depths in tissue. The high scalability of PAT has enabled anatomical imaging of biological structures ranging from organelles to organs. The inherent functional and molecular imaging capabilities of PAT have further allowed it to measure important physiological parameters and track critical cellular activities. Integration of PAT with other imaging technologies provides complementary capabilities and can potentially accelerate the clinical translation of PAT. PMID:25933617

  13. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that is best described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated by means of examples using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.
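    The Wishart randomization step can be sketched as drawing random positive-definite matrices whose mean equals a nominal system matrix, with the scatter controlled by the degrees-of-freedom parameter. The toy matrix and dispersion below are assumptions, not the powertrain model.

```python
# Illustrative sketch of Wishart-type randomization of a nominal positive-definite
# system matrix: samples have the nominal matrix as their mean, and the scatter is
# set by the degrees of freedom. The toy matrix and settings are assumptions, not
# the powertrain model.
import numpy as np
from scipy.stats import wishart

K_nominal = np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])        # toy stiffness-like matrix
dof = 50                                          # larger dof -> smaller scatter

samples = wishart.rvs(df=dof, scale=K_nominal / dof, size=200, random_state=0)
print("sample mean (should be close to K_nominal):\n", samples.mean(axis=0).round(2))
```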

  14. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
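    The operation the circuit accelerates, block-wise L1 normalization of cell histograms in a HOG-style descriptor, can be sketched in a few lines; the cell grid, block size and random "image" below are illustrative assumptions.

```python
# Sketch under assumptions: block-wise L1 normalization of cell-based orientation
# histograms, the operation the configurable L1-norm circuit accelerates. The cell
# grid, bin count, block size and random histograms are illustrative.
import numpy as np

rng = np.random.default_rng(0)
cells_y, cells_x, n_bins = 8, 8, 9
cell_hist = rng.random((cells_y, cells_x, n_bins))   # per-cell orientation histograms

block, eps = 2, 1e-6                                 # 2x2 cells per block
blocks = []
for by in range(cells_y - block + 1):
    for bx in range(cells_x - block + 1):
        v = cell_hist[by:by + block, bx:bx + block, :].ravel()
        blocks.append(v / (np.abs(v).sum() + eps))   # L1-norm block normalization
descriptor = np.concatenate(blocks)
print("HOG-style descriptor length:", descriptor.size)
```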

  15. Genotype-phenotype association study via new multi-task learning model

    PubMed Central

    Huo, Zhouyuan; Shen, Dinggang

    2018-01-01

    Research on the associations between genetic variations and imaging phenotypes is developing with advances in high-throughput genotyping and brain imaging techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. the ℓ2,1-norm, leading to better predictive results and insights into SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks, and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that a low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than the compared methods and presents new insights into SNPs. PMID:29218896
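    The low-rank idea can be illustrated with a generic multi-task regression between SNP features and imaging QTs, solved by proximal gradient descent with singular-value soft-thresholding. This is a sketch of the general technique under assumed synthetic data, not the paper's exact objective or algorithm.

```python
# Generic sketch of low-rank multi-task regression between SNP features X and
# imaging QTs Y: proximal gradient descent with singular-value soft-thresholding
# (trace-norm penalty). Synthetic data and the plain trace-norm objective are
# assumptions; the paper's model and solver differ in detail.
import numpy as np

rng = np.random.default_rng(0)
n, p, t = 100, 50, 10                       # subjects, SNPs, imaging QTs
W_true = rng.normal(size=(p, 2)) @ rng.normal(size=(2, t))   # rank-2 ground truth
X = rng.normal(size=(n, p))
Y = X @ W_true + 0.1 * rng.normal(size=(n, t))

lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
W = np.zeros((p, t))
for _ in range(300):
    G = X.T @ (X @ W - Y)                   # gradient of 0.5 * ||XW - Y||_F^2
    U, s, Vt = np.linalg.svd(W - step * G, full_matrices=False)
    W = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt    # prox of the trace norm
print("rank of estimated coefficient matrix:", np.linalg.matrix_rank(W, tol=1e-3))
```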

  16. Characterizing the microstructural basis of "unidentified bright objects" in neurofibromatosis type 1: A combined in vivo multicomponent T2 relaxation and multi-shell diffusion MRI analysis.

    PubMed

    Billiet, Thibo; Mädler, Burkhard; D'Arco, Felice; Peeters, Ronald; Deprez, Sabine; Plasschaert, Ellen; Leemans, Alexander; Zhang, Hui; den Bergh, Bea Van; Vandenbulcke, Mathieu; Legius, Eric; Sunaert, Stefan; Emsell, Louise

    2014-01-01

    The histopathological basis of "unidentified bright objects" (UBOs) (hyperintense regions seen on T2-weighted magnetic resonance (MR) brain scans in neurofibromatosis-1 (NF1)) remains unclear. New in vivo MRI-based techniques (multi-exponential T2 relaxation (MET2) and diffusion MR imaging (dMRI)) provide measures relating to microstructural change. We combined these methods and present previously unreported data on in vivo UBO microstructure in NF1. 3-Tesla dMRI data were acquired on 17 NF1 patients, covering 30 white matter UBOs. Diffusion tensor, kurtosis and neurite orientation and dispersion density imaging parameters were calculated within UBO sites and in contralateral normal appearing white matter (cNAWM). Analysis of MET2 parameters was performed on 24 UBO-cNAWM pairs. No significant alterations in the myelin water fraction and intra- and extracellular (IE) water fraction were found. Mean T2 time of IE water was significantly higher in UBOs. UBOs furthermore showed increased axial, radial and mean diffusivity, and decreased fractional anisotropy, mean kurtosis and neurite density index compared to cNAWM. Neurite orientation dispersion and isotropic fluid fraction were unaltered. Our results suggest that demyelination and axonal degeneration are unlikely to be present in UBOs, which appear to be mainly caused by a shift towards a higher T2-value of the intra- and extracellular water pool. This may arise from altered microstructural compartmentalization, and an increase in 'extracellular-like', intracellular water, possibly due to intramyelinic edema. These findings confirm the added value of combining dMRI and MET2 to characterize the microstructural basis of T2 hyperintensities in vivo.

  17. Model-Based Analysis for Qualitative Data: An Application in Drosophila Germline Stem Cell Regulation

    PubMed Central

    Pargett, Michael; Rundell, Ann E.; Buzzard, Gregery T.; Umulis, David M.

    2014-01-01

    Discovery in developmental biology is often driven by intuition that relies on the integration of multiple types of data such as fluorescent images, phenotypes, and the outcomes of biochemical assays. Mathematical modeling helps elucidate the biological mechanisms at play as the networks become increasingly large and complex. However, the available data is frequently under-utilized due to incompatibility with quantitative model tuning techniques. This is the case for stem cell regulation mechanisms explored in the Drosophila germarium through fluorescent immunohistochemistry. To enable better integration of biological data with modeling in this and similar situations, we have developed a general parameter estimation process to quantitatively optimize models with qualitative data. The process employs a modified version of the Optimal Scaling method from social and behavioral sciences, and multi-objective optimization to evaluate the trade-off between fitting different datasets (e.g. wild type vs. mutant). Using only published imaging data in the germarium, we first evaluated support for a published intracellular regulatory network by considering alternative connections of the same regulatory players. Simply screening networks against wild type data identified hundreds of feasible alternatives. Of these, five parsimonious variants were found and compared by multi-objective analysis including mutant data and dynamic constraints. With these data, the current model is supported over the alternatives, but support for a biochemically observed feedback element is weak (i.e. these data do not measure the feedback effect well). When also comparing new hypothetical models, the available data do not discriminate. To begin addressing the limitations in data, we performed a model-based experiment design and provide recommendations for experiments to refine model parameters and discriminate increasingly complex hypotheses. PMID:24626201

  18. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
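
    The second registration method (user-selected control points plus regression) amounts to a least-squares fit of a spatial transformation. A minimal sketch, assuming a 2D affine model and hypothetical control-point coordinates, could look like this:

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares 2D affine transform mapping src -> dst.

        src, dst: (N, 2) arrays of corresponding control points (N >= 3).
        Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T
        """
        src = np.asarray(src, float)
        dst = np.asarray(dst, float)
        ones = np.ones((src.shape[0], 1))
        X = np.hstack([src, ones])                   # (N, 3) design matrix
        A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) coefficients
        return A.T                                   # (2, 3) affine matrix

    def apply_affine(A, pts):
        pts = np.asarray(pts, float)
        ones = np.ones((pts.shape[0], 1))
        return np.hstack([pts, ones]) @ A.T

    # Hypothetical control points picked in two sensor images
    src = [(10, 12), (200, 15), (25, 180), (210, 190)]
    dst = [(8, 10), (205, 18), (22, 185), (212, 195)]
    A = fit_affine(src, dst)
    print(apply_affine(A, src))  # should approximate dst
    ```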

  19. Colony image acquisition and genetic segmentation algorithm and colony analyses

    NASA Astrophysics Data System (ADS)

    Wang, W. X.

    2012-01-01

    Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have made efforts to build image analysis systems. The main problems in these systems are image acquisition, image segmentation and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, between the camera lens and the lights, and between the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied in different applications.
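
    The downstream colony measurements described above can be illustrated with off-the-shelf tools. The sketch below substitutes a simple Otsu threshold for the paper's genetic segmentation step and reports basic size, shape and intensity parameters per colony; the file name and size filter are hypothetical.

    ```python
    import numpy as np
    from skimage import io, color, filters, measure, morphology

    # Load a dish image (path is hypothetical) and convert to grayscale
    img = color.rgb2gray(io.imread("dish.png"))

    # Simple global threshold as a stand-in for the genetic segmentation step
    binary = img > filters.threshold_otsu(img)
    binary = morphology.remove_small_objects(binary, min_size=20)

    # Basic colony parameters: size, shape (circularity) and mean intensity
    labels = measure.label(binary)
    for region in measure.regionprops(labels, intensity_image=img):
        circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
        print(region.label, region.area, round(circularity, 3),
              round(region.mean_intensity, 3))
    ```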

  20. Development of multi-dimensional body image scale for Malaysian female adolescents

    PubMed Central

    Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs. PMID:20126371

  1. Development of multi-dimensional body image scale for Malaysian female adolescents.

    PubMed

    Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin

    2008-01-01

    The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multi-dimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs.

  2. Optimized production planning model for a multi-plant cultivation system under uncertainty

    NASA Astrophysics Data System (ADS)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  3. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD approach can avoid explicit detection of the co-phase error. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can address this problem appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
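
    For readers unfamiliar with SPGD, a minimal sketch of the update loop is given below: the control vector is perturbed in parallel, the resulting change in an image-quality metric is measured, and the controls are moved along the estimated gradient. The gain, perturbation amplitude and toy metric are illustrative, not the paper's values.

    ```python
    import numpy as np

    def spgd(metric, u0, gain=0.5, sigma=0.05, n_iter=200, rng=None):
        """Stochastic parallel gradient descent (maximization form).

        metric: callable returning a scalar image-quality metric J(u)
        u0:     initial control vector (e.g., piston/tilt commands)
        gain:   update gain coefficient
        sigma:  amplitude of the random parallel perturbation
        """
        rng = np.random.default_rng() if rng is None else rng
        u = np.asarray(u0, float).copy()
        for _ in range(n_iter):
            du = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
            dJ = metric(u + du) - metric(u - du)                 # two-sided metric change
            u += gain * dJ * du                                  # gradient-estimate update
        return u

    # Toy metric with a known optimum at u = 0 (stands in for image sharpness)
    metric = lambda u: -np.sum(u ** 2)
    print(spgd(metric, u0=np.array([0.8, -0.5, 0.3])))
    ```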

  4. Aerial multi-camera systems: Accuracy and block triangulation issues

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio

    2015-03-01

    Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.

  5. Identifying the arterial input function from dynamic contrast-enhanced magnetic resonance images using an apex-seeking technique

    NASA Astrophysics Data System (ADS)

    Martel, Anne L.

    2004-04-01

    In order to extract quantitative information from dynamic contrast-enhanced MR images (DCE-MRI) it is usually necessary to identify an arterial input function. This is not a trivial problem if there are no major vessels present in the field of view. Most existing techniques rely on operator intervention or use various curve parameters to identify suitable pixels, but these are often specific to the anatomical region or the acquisition method used. They also require the signal from several pixels to be averaged in order to improve the signal-to-noise ratio; however, this introduces errors due to partial volume effects. We have described previously how factor analysis can be used to automatically separate arterial and venous components from DCE-MRI studies of the brain, but although that method works well for single-slice images through the brain when the blood-brain barrier is intact, it runs into problems for multi-slice images with more complex dynamics. This paper describes a factor analysis method that is more robust in such situations and is relatively insensitive to the number of physiological components present in the data set. The technique is very similar to that used to identify spectral end-members from multispectral remote sensing images.
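
    The paper's apex-seeking factor analysis is not reproduced here, but the general idea of decomposing pixel time-intensity curves into a few temporal components can be sketched with PCA as a generic stand-in; the array shapes, component count and percentile cut-off are hypothetical.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical DCE-MRI series: n_time frames of ny x nx pixels
    n_time, ny, nx = 40, 64, 64
    series = np.random.rand(n_time, ny, nx)          # replace with real data

    curves = series.reshape(n_time, -1).T            # (n_pixels, n_time) time curves
    pca = PCA(n_components=3)
    scores = pca.fit_transform(curves)               # per-pixel component scores
    components = pca.components_                     # temporal "factor" curves

    # Pixels loading strongly on an early, sharply peaking component are
    # candidate arterial pixels; their mean curve approximates the AIF.
    arterial = scores[:, 0] > np.percentile(scores[:, 0], 99)
    aif = curves[arterial].mean(axis=0)
    ```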

  6. Advances in combined endoscopic fluorescence confocal microscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Risi, Matthew D.

    Confocal microendoscopy provides real-time high resolution cellular level images via a minimally invasive procedure. Results from an ongoing clinical study to detect ovarian cancer with a novel confocal fluorescent microendoscope are presented. As an imaging modality, confocal fluorescence microendoscopy typically requires exogenous fluorophores, has a relatively limited penetration depth (100 μm), and often employs specialized aperture configurations to achieve real-time imaging in vivo. Two primary research directions designed to overcome these limitations and improve diagnostic capability are presented. Ideal confocal imaging performance is obtained with a scanning point illumination and confocal aperture, but this approach is often unsuitable for real-time, in vivo biomedical imaging. By scanning a slit aperture in one direction, image acquisition speeds are greatly increased, but at the cost of a reduction in image quality. The design, implementation, and experimental verification of a custom multi-point-scanning modification to a slit-scanning multi-spectral confocal microendoscope is presented. This new design improves the axial resolution while maintaining real-time imaging rates. In addition, the multi-point aperture geometry greatly reduces the effects of tissue scatter on imaging performance. Optical coherence tomography (OCT) has seen wide acceptance and FDA approval as a technique for ophthalmic retinal imaging, and has been adapted for endoscopic use. As a minimally invasive imaging technique, it provides morphological characteristics of tissues at a cellular level without requiring the use of exogenous fluorophores. OCT is capable of imaging deeper into biological tissue (˜1-2 mm) than confocal fluorescence microscopy. A theoretical analysis of the use of a fiber-bundle in spectral-domain OCT systems is presented. The fiber-bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the optical coherence tomography data. However, the multi-mode characteristic of the fibers in the fiber-bundle affects the depth sensitivity of the imaging system. A description of light interference in a multi-mode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis.

  7. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Teodoro, George; Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  8. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
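
    A reduced version of such a parameter study, ignoring the pruning of non-influential parameters and the high-performance-computing aspects, is an exhaustive sweep over a small grid scored against a reference mask. The `segment` callable, grid values and Dice scoring below are illustrative assumptions, not the framework's API.

    ```python
    import itertools
    import numpy as np

    def dice(a, b):
        """Dice overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def sweep(segment, image, reference, grid):
        """Score a segmentation workflow over every parameter combination.

        segment: hypothetical callable segment(image, **params) -> binary mask
        grid:    dict mapping parameter name -> list of candidate values
        Returns the best (dice, params) pair.
        """
        keys = sorted(grid)
        results = []
        for values in itertools.product(*(grid[k] for k in keys)):
            params = dict(zip(keys, values))
            results.append((dice(segment(image, **params), reference), params))
        return max(results, key=lambda r: r[0])

    # Example grid over two hypothetical workflow parameters
    grid = {"threshold": [0.3, 0.5, 0.7], "min_size": [10, 50, 100]}
    ```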

  9. Compact Micro-Imaging Spectrometer (CMIS): Investigation of Imaging Spectroscopy and Its Application to Mars Geology and Astrobiology

    NASA Technical Reports Server (NTRS)

    Staten, Paul W.

    2005-01-01

    Future missions to Mars will attempt to answer questions about Mars' geological and biological history. The goal of the CMIS project is to design, construct, and test a capable multi-spectral micro-imaging spectrometer for use in such missions. A breadboard instrument has been constructed with a micro-imaging camera and several multi-wavelength LED illumination rings. Test samples have been chosen for their interest to spectroscopists, geologists and astrobiologists. Preliminary analysis has demonstrated the advantages of isotropic illumination and micro-imaging spectroscopy over spot spectroscopy.

  10. Segmentation of white rat sperm image

    NASA Astrophysics Data System (ADS)

    Bai, Weiguo; Liu, Jianguo; Chen, Guoyuan

    2011-11-01

    The segmentation of sperm images exerts a profound influence on the analysis of sperm morphology, which plays a significant role in research on animal infertility and reproduction. To overcome the microscope image's low contrast and heavy noise pollution, and to obtain better segmentation results, this paper presents a multi-scale gradient operator combined with multiple structuring elements for micro-spermatozoa images of the white rat: the multi-scale gradient operator smooths the noise in the image, while the multiple structuring elements retain more shape details of the sperm. The Otsu method is then used to segment the modified gradient image, whose gray scale is strong in the sperm and weak in the background, converting it into a binary sperm image. Because the obtained binary image contains impurities whose shapes are not similar to sperm, a form factor is used to remove those objects whose form factor exceeds a selected critical value and to retain the rest, yielding the final binary image of the segmented sperm. Experiments show this method's great advantage in the segmentation of micro-spermatozoa images.
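
    A minimal sketch of the pipeline described above, using scikit-image stand-ins (a multi-scale morphological gradient, an Otsu threshold and form-factor filtering), is shown below; the scale radii and the form-factor cut-off are illustrative, not the paper's values.

    ```python
    import numpy as np
    from skimage import filters, measure, morphology

    def segment_sperm(gray, scales=(1, 2, 3), max_form_factor=0.6):
        """Multi-scale morphological gradient + Otsu + shape filtering.
        `gray` is a 2-D grayscale image in [0, 1]; parameters are illustrative."""
        # Multi-scale morphological gradient: average of (dilation - erosion)
        grad = np.zeros_like(gray, dtype=float)
        for r in scales:
            se = morphology.disk(r)
            grad += morphology.dilation(gray, se) - morphology.erosion(gray, se)
        grad /= len(scales)

        # Otsu threshold on the gradient image
        binary = grad > filters.threshold_otsu(grad)

        # Discard round impurities (high form factor), keep elongated sperm
        labels = measure.label(binary)
        keep = np.zeros_like(binary)
        for region in measure.regionprops(labels):
            ff = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
            if ff <= max_form_factor:
                keep[labels == region.label] = True
        return keep
    ```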

  11. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    PubMed

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems which are currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and masses, respectively. Firstly, textural patterns of breast tissue have been analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed in this paper is the parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and the selected textural features. Furthermore, the obtained results indicate the promising performance of the proposed textural features and, more specifically, those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
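
    The PSO-based model selection can be sketched as a small particle swarm searching over SVM hyper-parameters, with cross-validated accuracy as the fitness. The search ranges, swarm size and synthetic data below are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=20, random_state=0)

    # Search space: log10(C) in [-2, 3], log10(gamma) in [-4, 1]
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
    n_particles, n_iter = 10, 20
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        C, gamma = 10 ** p
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()

    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()

    print("best log10(C), log10(gamma):", gbest)
    ```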

  12. Automatic multi-label annotation of abdominal CT images using CBIR

    NASA Astrophysics Data System (ADS)

    Xue, Zhiyun; Antani, Sameer; Long, L. Rodney; Thoma, George R.

    2017-03-01

    We present a technique to annotate multiple organs shown in 2-D abdominal/pelvic CT images using CBIR. This annotation task is motivated by our research interests in visual question-answering (VQA). We aim to apply results from this effort in Open-iSM, a multimodal biomedical search engine developed by the National Library of Medicine (NLM). Understanding visual content of biomedical images is a necessary step for VQA. Though sufficient annotational information about an image may be available in related textual metadata, not all may be useful as descriptive tags, particularly for anatomy on the image. In this paper, we develop and evaluate a multi-label image annotation method using CBIR. We evaluate our method on two 2-D CT image datasets we generated from 3-D volumetric data obtained from a multi-organ segmentation challenge hosted in MICCAI 2015. Shape and spatial layout information is used to encode visual characteristics of the anatomy. We adapt a weighted voting scheme to assign multiple labels to the query image by combining the labels of the images identified as similar by the method. Key parameters that may affect the annotation performance, such as the number of images used in the label voting and the threshold for excluding labels that have low weights, are studied. The method proposes a coarse-to-fine retrieval strategy which integrates the classification with the nearest-neighbor search. Results from our evaluation (using the MICCAI CT image datasets as well as figures from Open-i) are presented.
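
    The weighted voting step can be sketched independently of the retrieval engine: each retrieved image contributes its labels with a weight given by its similarity, and labels whose accumulated weight falls below a threshold are dropped. The similarities, organ labels and threshold below are hypothetical.

    ```python
    from collections import defaultdict

    def vote_labels(neighbors, threshold=0.3):
        """Weighted voting over labels of retrieved similar images.

        neighbors: list of (similarity, labels) pairs returned by a CBIR query,
                   e.g. [(0.9, {"liver", "spleen"}), (0.7, {"liver"}), ...]
        threshold: labels whose normalized weight falls below this are dropped.
        """
        weights = defaultdict(float)
        total = sum(sim for sim, _ in neighbors)
        for sim, labels in neighbors:
            for label in labels:
                weights[label] += sim
        return {lab: w / total for lab, w in weights.items() if w / total >= threshold}

    # Hypothetical top-3 retrieved images and their organ labels
    print(vote_labels([(0.9, {"liver", "right kidney"}),
                       (0.8, {"liver", "spleen"}),
                       (0.5, {"spleen"})]))
    ```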

  13. Analysis of Rapid Multi-Focal Zone ARFI Imaging

    PubMed Central

    Rosenzweig, Stephen; Palmeri, Mark; Nightingale, Kathryn

    2015-01-01

    Acoustic radiation force impulse (ARFI) imaging has shown promise for visualizing structure and pathology within multiple organs; however, because the contrast depends on the push beam excitation width, image quality suffers outside of the region of excitation. Multi-focal zone ARFI imaging has previously been used to extend the region of excitation (ROE), but the increased acquisition duration and acoustic exposure have limited its utility. Supersonic shear wave imaging has previously demonstrated that through technological improvements in ultrasound scanners and power supplies, it is possible to rapidly push at multiple locations prior to tracking displacements, facilitating extended depth of field shear wave sources. Similarly, ARFI imaging can utilize these same radiation force excitations to achieve tight pushing beams with a large depth of field. Finite element method simulations and experimental data are presented demonstrating that single- and rapid multi-focal zone ARFI have comparable image quality (less than 20% loss in contrast), but the multi-focal zone approach has an extended axial region of excitation. Additionally, as compared to single push sequences, the rapid multi-focal zone acquisitions improve the contrast to noise ratio by up to 40% in an example 4 mm diameter lesion. PMID:25643078

  14. [Object-oriented aquatic vegetation extracting approach based on visible vegetation indices].

    PubMed

    Jing, Ran; Deng, Lei; Zhao, Wen Ji; Gong, Zhao Ning

    2016-05-01

    Using the estimation of scale parameters (ESP) image segmentation tool to determine the ideal image segmentation scale, the optimal segmented image was created by the multi-scale segmentation method. Based on the visible vegetation indices derived from mini-UAV imaging data, we chose a set of optimal vegetation indices from a series of visible vegetation indices and built up a decision tree rule. A membership function was used to automatically classify the study area, and an aquatic vegetation map was generated. The results showed that the overall accuracy of image classification using supervised classification was 53.7%, while the overall accuracy of object-oriented image analysis (OBIA) was 91.7%. Compared with the pixel-based supervised classification method, the OBIA method significantly improved the image classification result and further increased the accuracy of extracting the aquatic vegetation. The Kappa value of the supervised classification was 0.4, and the Kappa value based on OBIA was 0.9. The experimental results demonstrated that extracting aquatic vegetation using visible vegetation indices derived from mini-UAV data and the OBIA method developed in this study was feasible and could be applied in other physically similar areas.
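
    The paper selects its visible vegetation indices empirically; as one common example of such an index, the sketch below computes the Excess Green index (ExG = 2g − r − b) on chromatic coordinates. The threshold shown in the comment is illustrative only.

    ```python
    import numpy as np

    def excess_green(rgb):
        """Excess Green index (ExG = 2g - r - b) from an RGB image.
        ExG is one common visible-band vegetation index; the paper selects
        its own indices empirically, so this choice is illustrative."""
        rgb = rgb.astype(float)
        total = rgb.sum(axis=-1, keepdims=True) + 1e-9
        r, g, b = np.moveaxis(rgb / total, -1, 0)   # chromatic coordinates
        return 2 * g - r - b

    # Thresholding the index gives a rough vegetation mask; the paper instead
    # feeds such indices into multi-scale segmentation and a decision tree.
    # mask = excess_green(uav_image) > 0.1
    ```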

  15. Fast Detection of Airports on Remote Sensing Images with Single Shot MultiBox Detector

    NASA Astrophysics Data System (ADS)

    Xia, Fei; Li, HuiZhou

    2018-01-01

    This paper introduces a method for fast airport detection on remote sensing images (RSIs) using the Single Shot MultiBox Detector (SSD). To our knowledge, this could be the first study which introduces an end-to-end detection model into airport detection on RSIs. Based on the common low-level features between natural images and RSIs, a convolutional neural network trained on large amounts of natural images was transferred to tackle the airport detection problem with limited annotated data. To deal with the specific characteristics of RSIs, some related parameters in the SSD, such as the scales and layers, were modified for more accurate and faster detection. The experiments show that the proposed method achieves 83.5% Average Recall at 8 FPS on RSIs of size 1024×1024. In contrast to Faster R-CNN, improvements in AP and speed were obtained.

  16. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and using digital methods to repair damaged film is now a mainstream trend. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our model combines multiple frames to establish a simple correction model and includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it can remove fading flicker efficiently.
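
    The low-rank factorization with missing entries at the core of the second step can be sketched with a simple alternating least-squares scheme; the rank, regularization and iteration count below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    def als_complete(M, mask, rank=2, n_iter=50, reg=1e-3, rng=None):
        """Low-rank factorization M ~= U @ V.T using only observed entries.

        M:    matrix built from sparse color correspondences (rows:
              correspondences, columns: frames); unobserved entries are
              ignored via `mask`.
        mask: boolean array, True where M is observed.
        """
        rng = np.random.default_rng() if rng is None else rng
        m, n = M.shape
        U = rng.standard_normal((m, rank))
        V = rng.standard_normal((n, rank))
        I = reg * np.eye(rank)
        for _ in range(n_iter):
            for i in range(m):                       # update row factors
                cols = mask[i]
                if cols.any():
                    Vi = V[cols]
                    U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ M[i, cols])
            for j in range(n):                       # update column factors
                rows = mask[:, j]
                if rows.any():
                    Uj = U[rows]
                    V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ M[rows, j])
        return U, V
    ```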

  17. The Ring-Barking Experiment: Analysis of Forest Vitality Using Multi-Temporal Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Reichmuth, Anne; Bachmann, Martin; Heiden, Uta; Pinnel, Nicole; Holzwarth, Stefanie; Muller, Andreas; Henning, Lea; Einzmann, Kathrin; Immitzer, Markus; Seitz, Rudolf

    2016-08-01

    New operational optical spaceborne sensors (EnMAP and Sentinel-2) will foster the analysis of climate change impacts on forest ecosystems. This analysis examines the potential of data with high spectral, spatial and temporal resolution for detecting forest vegetation parameters, in particular chlorophyll and canopy water content. The study site is a temperate spruce forest in Germany where in 2013 several trees were ring-barked for a controlled die-off. During this experiment, ring-barked and control trees were observed. Twelve airborne hyperspectral HySpex VNIR (visible/near infrared) and SWIR (shortwave infrared) datasets with 1 m spatial resolution and 416 spectral bands were acquired during the vegetation periods of 2013 and 2014. Additional laboratory spectral measurements of collected needle samples from ring-barked and control trees are available for needle-level analysis. Index analysis of the laboratory measurements and image data is presented in this study.

  18. Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili

    2017-08-01

    A key problem of effectiveness evaluation for fractured-vuggy carbonatite reservoir is how to accurately extract fracture and vug information from electrical imaging logging data. Drill bits quaked during drilling and resulted in rugged surfaces of borehole walls and thus conductivity fluctuations in electrical imaging logging data. The occurrence of the conductivity fluctuations (formation background noise) directly affects the fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on wavelet packet transform to eliminate the influence of rugged borehole walls. The noise is present as fluctuations in button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images. The noise has responses in various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method is to decompose the data into coefficients with wavelet packet transform on a quadratic spline basis, then shrink high-frequency wavelet packet coefficients in different resolutions with minimax threshold and hard-threshold function, and finally reconstruct the thresholded coefficients. We use electrical imaging logging data collected from fractured-vuggy Ordovician carbonatite reservoir in Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.
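
    A stripped-down version of the de-noising chain (decompose, threshold the detail packets, reconstruct) can be written with PyWavelets. The paper uses a quadratic-spline basis and a minimax threshold; in this sketch a Daubechies wavelet and the universal threshold stand in for both.

    ```python
    import numpy as np
    import pywt

    def wp_denoise(img, wavelet="db2", level=3):
        """Wavelet-packet hard-threshold de-noising of a 2-D image."""
        wp = pywt.WaveletPacket2D(data=img, wavelet=wavelet,
                                  mode="symmetric", maxlevel=level)
        # Noise scale estimated from the first-level diagonal detail packet
        sigma = np.median(np.abs(wp["d"].data)) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold
        for node in wp.get_level(level):
            if node.path != "a" * level:                 # keep the approximation
                node.data = pywt.threshold(node.data, thresh, mode="hard")
        return wp.reconstruct(update=True)
    ```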

  19. Fisher information and Cramér-Rao lower bound for experimental design in parallel imaging.

    PubMed

    Bouhrara, Mustapha; Spencer, Richard G

    2018-06-01

    The Cramér-Rao lower bound (CRLB) is widely used in the design of magnetic resonance (MR) experiments for parameter estimation. Previous work has considered only Gaussian or Rician noise distributions in this calculation. However, the noise distribution for multi-coil acquisitions, such as in parallel imaging, obeys the noncentral χ-distribution under many circumstances. The purpose of this paper is to present the CRLB calculation for parameter estimation from multi-coil acquisitions. We perform explicit calculations of Fisher matrix elements and the associated CRLB for noise distributions following the noncentral χ-distribution. The special case of diffusion kurtosis is examined as an important example. For comparison with analytic results, Monte Carlo (MC) simulations were conducted to evaluate experimental minimum standard deviations (SDs) in the estimation of diffusion kurtosis model parameters. Results were obtained for a range of signal-to-noise ratios (SNRs), and for both the conventional case of Gaussian noise distribution and noncentral χ-distribution with different numbers of coils, m. At low-to-moderate SNR, the noncentral χ-distribution deviates substantially from the Gaussian distribution. Our results indicate that this departure is more pronounced for larger values of m. As expected, the minimum SDs (i.e., CRLB) in derived diffusion kurtosis model parameters assuming a noncentral χ-distribution provided a closer match to the MC simulations as compared to the Gaussian results. Estimates of minimum variance for parameter estimation and experimental design provided by the CRLB must account for the noncentral χ-distribution of noise in multi-coil acquisitions, especially in the low-to-moderate SNR regime. Magn Reson Med 79:3249-3255, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
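
    For reference, the generic relations underlying this calculation are the Fisher information matrix and the resulting bound on the covariance of any unbiased estimator (in the paper, the likelihood p is the noncentral χ density):

    ```latex
    % Fisher information matrix and Cram\'er-Rao lower bound for an unbiased
    % estimator \hat{\theta} of parameters \theta from data y with likelihood p(y;\theta)
    \[
      I_{jk}(\theta) \;=\; \mathbb{E}\!\left[
          \frac{\partial \ln p(y;\theta)}{\partial \theta_j}\,
          \frac{\partial \ln p(y;\theta)}{\partial \theta_k}\right],
      \qquad
      \operatorname{Cov}\!\bigl(\hat{\theta}\bigr) \;\succeq\; I(\theta)^{-1},
    \]
    % so the minimum attainable standard deviation of \hat{\theta}_j is
    \[
      \sigma_{\min}\bigl(\hat{\theta}_j\bigr) \;=\; \sqrt{\bigl[I(\theta)^{-1}\bigr]_{jj}} .
    \]
    ```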

  20. Multi-echo GRE imaging of knee cartilage.

    PubMed

    Yuen, Joanna; Hung, Jachin; Wiggermann, Vanessa; Robinson, Simon D; McCormack, Robert; d'Entremont, Agnes G; Rauscher, Alexander

    2017-05-01

    To visualize healthy and abnormal articular cartilage, we investigated the potential of using the 3D multi-echo gradient echo (GRE) signal's magnitude and frequency and maps of T2* relaxation. After optimizing imaging parameters in five healthy volunteers, 3D multi-echo GRE magnetic resonance (MR) images were acquired at 3T in four patients with chondral damage prior to their arthroscopic surgery. Average magnitude and frequency information was extracted from the GRE images, and T2* maps were generated. Cartilage abnormalities were confirmed after arthroscopy and were graded using the Outerbridge classification scheme. Regions of interest were identified on average magnitude GRE images and compared to arthroscopy. All four patients presented with regions of Outerbridge Grade I and II cartilage damage on arthroscopy. One patient had Grade III changes. Grade I, II, and III changes were detectable on average magnitude and T2* maps, while Grade II and higher changes were also observable on MR frequency maps. For average magnitude images of healthy volunteers, the signal-to-noise ratio of the magnitude image averaged over three echoes was 4.26 ± 0.32, 12.26 ± 1.09, 14.31 ± 1.93, and 13.36 ± 1.13 in bone, femoral, tibial, and patellar cartilage, respectively. This proof-of-principle study demonstrates the feasibility of using different imaging contrasts from the 3D multi-echo GRE scan to visualize abnormalities of the articular cartilage. Level of Evidence: 1. J. Magn. Reson. Imaging 2017;45:1502-1513. © 2016 International Society for Magnetic Resonance in Medicine.
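
    The T2* maps mentioned above follow from a voxel-wise mono-exponential fit to the multi-echo magnitudes. A minimal log-linear fitting sketch is shown below; it ignores the noise floor and any Rician bias, and the array shapes and echo times are placeholders.

    ```python
    import numpy as np

    def fit_t2star(echoes, te):
        """Voxel-wise mono-exponential T2* fit from multi-echo GRE magnitudes.

        echoes: array of shape (n_echoes, ...) of magnitude images
        te:     echo times (same units as the returned T2*), shape (n_echoes,)
        Returns (S0, T2*) maps from a log-linear least-squares fit.
        """
        te = np.asarray(te, float)
        logs = np.log(np.clip(echoes, 1e-6, None)).reshape(len(te), -1)
        A = np.vstack([np.ones_like(te), -te]).T      # model: ln S = ln S0 - TE * R2*
        coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
        s0 = np.exp(coef[0]).reshape(echoes.shape[1:])
        r2s = coef[1].reshape(echoes.shape[1:])
        t2s = np.where(r2s > 0, 1.0 / np.maximum(r2s, 1e-6), np.inf)
        return s0, t2s
    ```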

  1. Automatic analysis of diabetic peripheral neuropathy using multi-scale quantitative morphology of nerve fibres in corneal confocal microscopy imaging.

    PubMed

    Dabbah, M A; Graham, J; Petropoulos, I N; Tavakoli, M; Malik, R A

    2011-10-01

    Diabetic peripheral neuropathy (DPN) is one of the most common long term complications of diabetes. Corneal confocal microscopy (CCM) image analysis is a novel non-invasive technique which quantifies corneal nerve fibre damage and enables diagnosis of DPN. This paper presents an automatic analysis and classification system for detecting nerve fibres in CCM images based on a multi-scale adaptive dual-model detection algorithm. The algorithm exploits the curvilinear structure of the nerve fibres and adapts itself to the local image information. Detected nerve fibres are then quantified and used as feature vectors for classification using random forest (RF) and neural networks (NNT) classifiers. We show, in a comparative study with other well known curvilinear detectors, that the best performance is achieved by the multi-scale dual model in conjunction with the NNT classifier. An evaluation of clinical effectiveness shows that the performance of the automated system matches that of ground-truth defined by expert manual annotation. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Multiattribute selection of acute stroke imaging software platform for Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial.

    PubMed

    Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A

    2013-04-01

    The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
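
    Multi-Attribute Value Measurement covers several methods; the simplest, an additive (weighted-sum) value model, is sketched below. The attributes, weights and scores are purely illustrative and are not those used by the EXTEND panel.

    ```python
    def additive_value(scores, weights):
        """Additive multi-attribute value: sum of weighted single-attribute values.
        `scores` maps attribute -> value in [0, 1] for one software platform;
        `weights` are the panel's attribute weights, summing to 1."""
        return sum(weights[a] * scores[a] for a in weights)

    # Hypothetical attributes and platforms (illustrative numbers only)
    weights = {"processing speed": 0.4, "robustness": 0.35, "usability": 0.25}
    platforms = {
        "Platform A": {"processing speed": 0.9, "robustness": 0.7, "usability": 0.8},
        "Platform B": {"processing speed": 0.6, "robustness": 0.9, "usability": 0.7},
    }
    best = max(platforms, key=lambda p: additive_value(platforms[p], weights))
    print(best)
    ```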

  3. Computer-aided diagnosis for osteoporosis using chest 3D CT images

    NASA Astrophysics Data System (ADS)

    Yoneda, K.; Matsuhiro, M.; Suzuki, H.; Kawata, Y.; Niki, N.; Nakano, Y.; Ohmatsu, H.; Kusumoto, M.; Tsuchida, T.; Eguchi, K.; Kaneko, M.

    2016-03-01

    About 13 million people in Japan suffer from osteoporosis, making it one of the major problems of an aging society. Preventing osteoporosis requires early detection and treatment. Multi-slice CT technology has been improving three-dimensional (3-D) image analysis, with higher body-axis resolution and shorter scan times. 3-D image analysis of multi-slice CT images of the thoracic vertebrae can be used to support the diagnosis of osteoporosis and, at the same time, for lung cancer diagnosis, which may lead to early detection. We develop an automatic extraction and partitioning algorithm for the spinal column by analyzing vertebral body structure, and an analysis algorithm for the vertebral body using shape analysis and a bone density measurement for the diagnosis of osteoporosis. The osteoporosis diagnosis support system achieved a high extraction rate for the thoracic vertebrae at both normal and low doses.

  4. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding the cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation in the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least in some finer or coarser scale). Based on this observation, we propose a three-steps actin filaments extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with a (iii) quasi-straight filaments merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filaments orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological images processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
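
    The first two steps of the methodology (cartoon/texture decomposition and multi-scale line detection) can be approximated with off-the-shelf scikit-image operators, as sketched below; the TV weight, ridge scales and Otsu cut are illustrative stand-ins for the framework's own components.

    ```python
    from skimage import filters, restoration

    def filament_map(img, tv_weight=0.1, sigmas=range(1, 4)):
        """Rough stand-in for the first two steps described above: a total-
        variation based cartoon/texture split followed by a multi-scale ridge
        (line) detector. The original framework's decomposition and the
        quasi-straight merging step are more elaborate."""
        cartoon = restoration.denoise_tv_chambolle(img, weight=tv_weight)  # cartoon part
        texture = img - cartoon                                            # noise/texture part
        ridges = filters.sato(cartoon, sigmas=sigmas, black_ridges=False)  # line response
        return ridges > filters.threshold_otsu(ridges), texture
    ```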

  5. Missouri University Multi-Plane Imager (MUMPI): A high sensitivity rapid dynamic ECT brain imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, K.W.; Holmes, R.A.

    1984-01-01

    The authors have designed a unique ECT imaging device that can record rapid dynamic images of brain perfusion. The Missouri University Multi-Plane Imager (MUMPI) uses a single crystal detector that produces four orthogonal two-dimensional images simultaneously. Multiple slice images are reconstructed from counts recorded from stepwise or continuous collimator rotation. Four simultaneous 2-D image fields may also be recorded and reviewed. The cylindrical sodium iodide crystal and the rotating collimator concentrically surround the source volume being imaged, with the collimator the only moving part. The design and function parameters of MUMPI have been compared to other competitive tomographic head imaging devices. MUMPI's principal advantages are: 1) simultaneous direct acquisition of four two-dimensional images; 2) extremely rapid projection-set acquisition for ECT reconstruction; and 3) instrument practicality and economy due to the single detector design and the absence of heavy mechanical moving components (only collimator rotation is required). MUMPI should be ideal for imaging neutral lipophilic chelates such as Tc-99m-PnAO, which passively diffuses across the intact blood-brain barrier and rapidly clears from brain tissue.

  6. Using aerial images for establishing a workflow for the quantification of water management measures

    NASA Astrophysics Data System (ADS)

    Leuschner, Annette; Merz, Christoph; van Gasselt, Stephan; Steidl, Jörg

    2017-04-01

    Quantified landscape characteristics, such as morphology, land use or hydrological conditions, play an important role in hydrological investigations, as landscape parameters directly control the overall water balance. A powerful assimilation and geospatial analysis of remote sensing datasets in combination with hydrological modeling allows landscape parameters and water balances to be quantified efficiently. This study focuses on the development of a workflow to extract hydrologically relevant data from aerial image datasets and derived products in order to allow an effective parametrization of a hydrological model. Consistent and self-contained data sources are indispensable for achieving reasonable modeling results. In order to minimize uncertainties and inconsistencies, input parameters for modeling should, if possible, be extracted mainly from one remote-sensing dataset. Here, aerial images have been chosen because of their high spatial and spectral resolution, which permits the extraction of various model-relevant parameters, such as morphology, land use or artificial drainage systems. The methodological repertoire for extracting environmental parameters ranges from analyses of digital terrain models to multispectral classification and segmentation of land-use distribution maps and mapping of artificial drainage systems based on spectral and visual inspection. The workflow has been tested for a mesoscale catchment area which forms a characteristic hydrological system of a young moraine landscape located in the state of Brandenburg, Germany. These datasets were used as input for multi-temporal hydrological modelling of water balances to detect and quantify anthropogenic and meteorological impacts. ArcSWAT, a GIS-implemented extension and graphical user interface for the Soil and Water Assessment Tool (SWAT), was chosen. The results of this modeling approach provide the basis for anticipating future development of the hydrological system and for adapting water resource management decisions to system changes.

  7. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  8. Accuracy of DSM based on digital aerial image matching. (Polish Title: Dokładność NMPT tworzonego metodą automatycznego dopasowania cyfrowych zdjęć lotniczych)

    NASA Astrophysics Data System (ADS)

    Kubalska, J. L.; Preuss, R.

    2013-12-01

    Digital Surface Models (DSMs) are increasingly used in GIS databases as a standalone product. They are also necessary to create other products such as 3D city models, true-ortho images and object-oriented classification. This article presents the results of DSM generation for the classification of vegetation in urban areas. The source data allowed a DSM to be produced using an image matching method and ALS data. The creation of the DSM from digital images, obtained by the Vexcel UltraCam-D digital camera, was carried out in Match-T by INPHO. This program optimizes the configuration of the image matching process, which ensures high accuracy and minimizes gap areas. The accuracy of this process was analyzed by comparing the DSM generated in Match-T with the DSM generated from ALS data. Because of the intended further use of the generated DSM, it was decided to create the model in a GRID structure with a cell size of 1 m. With this parameter, a differential model from both DSMs was also built, which allowed the relative accuracy of the compared models to be determined. The analysis indicates that DSM generation with the multi-image matching method is competitive with surface model creation from ALS data. Thus, when digital images with high overlap are available, the additional acquisition of ALS data seems to be unnecessary.

  9. Multiscale analysis of the correlation of processing parameters on viscidity of composites fabricated by automated fiber placement

    NASA Astrophysics Data System (ADS)

    Han, Zhenyu; Sun, Shouzheng; Fu, Yunzhong; Fu, Hongya

    2017-10-01

    Viscidity is an important physical indicator for assessing the fluidity of resin, which helps the resin contact the fibers effectively and reduces manufacturing defects during the automated fiber placement (AFP) process. However, the effect of processing parameters on viscidity evolution during the AFP process is rarely studied. In this paper, viscidities at different scales are analyzed based on a multi-scale analysis method. Firstly, the viscous dissipation energy (VDE) within the meso-unit under different processing parameters is assessed using the finite element method (FEM). According to the multi-scale energy transfer model, the meso-unit energy is used as the boundary condition for the microscopic analysis. Furthermore, the molecular structure of the micro-system is built by the molecular dynamics (MD) method, and viscosity curves are then obtained by integrating the stress autocorrelation function (SACF) over time (the Green-Kubo relation, recalled below). Finally, the correlation characteristics of the processing parameters with respect to viscosity are revealed using the gray relational analysis method (GRAM). A group of processing parameters is identified that achieves stable viscosity and better fluidity of the resin.
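
    The viscosity-from-SACF step referred to above is the Green-Kubo relation for shear viscosity, which in its usual form reads:

    ```latex
    % Green-Kubo relation: shear viscosity from the stress (pressure-tensor)
    % autocorrelation function computed in the MD micro-system
    \[
      \eta \;=\; \frac{V}{k_{\mathrm B}T}\int_{0}^{\infty}
          \bigl\langle P_{xy}(0)\,P_{xy}(t)\bigr\rangle\,\mathrm{d}t ,
    \]
    % where V is the system volume, T its temperature, and P_{xy} an
    % off-diagonal component of the pressure tensor.
    ```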

  10. Dynamic clustering detection through multi-valued descriptors of dermoscopic images.

    PubMed

    Cozza, Valentina; Guarracino, Maria Rosario; Maddalena, Lucia; Baroni, Adone

    2011-09-10

    This paper introduces a dynamic clustering methodology based on multi-valued descriptors of dermoscopic images. The main idea is to support medical diagnosis by deciding whether pigmented skin lesions belonging to an uncertain set are nearer to malignant melanoma or to benign nevi. Melanoma is the most deadly skin cancer, and early diagnosis is a current challenge for clinicians. Most data analysis algorithms for skin lesion discrimination focus on segmentation and extraction of features of categorical or numerical type. As an alternative approach, this paper introduces two new concepts: first, it considers multi-valued data, in which lesions are described not only by scalar variables but also by interval or histogram variables; second, it introduces a dynamic clustering method based on the Wasserstein distance to compare multi-valued data. The overall strategy of analysis can be summarized in the following steps: first, a segmentation of dermoscopic images allows a set of multi-valued descriptors to be identified; second, we performed a discriminant analysis on a set of images with an a priori classification, so that it is possible to detect which features discriminate benign and malignant lesions; and third, we performed the proposed dynamic clustering method on the uncertain cases, which need to be associated with one of the two previously mentioned groups. Results based on clinical data show that the grading of specific descriptors associated with dermoscopic characteristics provides a novel way to characterize uncertain lesions that can help the dermatologist's diagnosis. Copyright © 2011 John Wiley & Sons, Ltd.
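
    The assignment of an uncertain lesion by Wasserstein distance can be sketched with SciPy, treating each histogram-valued descriptor as weights over common bins; the histograms and group prototypes below are hypothetical, and a full dynamic clustering would also re-estimate the prototypes iteratively.

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def hist_wasserstein(h1, h2, bin_centers):
        """1-D Wasserstein distance between two histogram-valued descriptors
        defined on the same bins (histograms passed as weights)."""
        return wasserstein_distance(bin_centers, bin_centers,
                                    u_weights=h1, v_weights=h2)

    def nearest_prototype(lesion_hist, prototypes, bin_centers):
        """Assign an uncertain lesion to the closer group prototype."""
        return min(prototypes, key=lambda name:
                   hist_wasserstein(lesion_hist, prototypes[name], bin_centers))

    # Hypothetical normalized color histograms over five common bins
    bins = np.arange(5, dtype=float)
    benign = np.array([0.05, 0.20, 0.45, 0.25, 0.05])
    malignant = np.array([0.30, 0.35, 0.20, 0.10, 0.05])
    lesion = np.array([0.10, 0.25, 0.40, 0.20, 0.05])
    print(nearest_prototype(lesion, {"benign": benign, "malignant": malignant}, bins))
    ```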

  11. DCE-MRI, DW-MRI, and MRS in Cancer: Challenges and Advantages of Implementing Qualitative and Quantitative Multi-parametric Imaging in the Clinic

    PubMed Central

    Winfield, Jessica M.; Payne, Geoffrey S.; Weller, Alex; deSouza, Nandita M.

    2016-01-01

    Abstract Multi-parametric magnetic resonance imaging (mpMRI) offers a unique insight into tumor biology by combining functional MRI techniques that inform on cellularity (diffusion-weighted MRI), vascular properties (dynamic contrast-enhanced MRI), and metabolites (magnetic resonance spectroscopy) and has scope to provide valuable information for prognostication and response assessment. Challenges in the application of mpMRI in the clinic include the technical considerations in acquiring good quality functional MRI data, development of robust techniques for analysis, and clinical interpretation of the results. This article summarizes the technical challenges in acquisition and analysis of multi-parametric MRI data before reviewing the key applications of multi-parametric MRI in clinical research and practice. PMID:27748710

  12. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organ and full size embryos. It can also record hundreds of thousands cultured cells at multiple wavelength in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  13. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds of signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organs and full-size embryos. It can also record hundreds of thousands of cultured cells at multiple wavelengths in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as a high content analysis (HCA) instrument for automated screening processes.

  14. Image quality phantom and parameters for high spatial resolution small-animal SPECT

    NASA Astrophysics Data System (ADS)

    Visser, Eric P.; Harteveld, Anita A.; Meeuwis, Antoi P. W.; Disselhorst, Jonathan A.; Beekman, Freek J.; Oyen, Wim J. G.; Boerman, Otto C.

    2011-10-01

    At present, generally accepted standards to characterize small-animal single photon emission tomographs (SPECT) do not exist. Whereas for small-animal positron emission tomography (PET), the NEMA NU 4-2008 guidelines are available, such standards are still lacking for small-animal SPECT. More specifically, a dedicated image quality (IQ) phantom and corresponding IQ parameters are absent. The structures of the existing PET IQ phantom are too large to fully characterize the sub-millimeter spatial resolution of modern multi-pinhole SPECT scanners, and its diameter will not fit into all scanners when operating in high spatial resolution mode. We therefore designed and constructed an adapted IQ phantom with smaller internal structures and external diameter, and a facility to guarantee complete filling of the smallest rods. The associated IQ parameters were adapted from NEMA NU 4. An additional parameter, effective whole-body sensitivity, was defined since this was considered relevant in view of the variable size of the field of view and the use of multiple bed positions as encountered in modern small-animal SPECT scanners. The usefulness of the phantom was demonstrated for 99mTc in a USPECT-II scanner operated in whole-body scanning mode using a multi-pinhole mouse collimator with 0.6 mm pinhole diameter.

  15. Evaluation of Renal Oxygenation Level Changes after Water Loading Using Susceptibility-Weighted Imaging and T2* Mapping.

    PubMed

    Ding, Jiule; Xing, Wei; Wu, Dongmei; Chen, Jie; Pan, Liang; Sun, Jun; Xing, Shijun; Dai, Yongming

    2015-01-01

    To assess the feasibility of susceptibility-weighted imaging (SWI) for monitoring changes in renal oxygenation level after water loading. Thirty-two volunteers (age, 28.0 ± 2.2 years) were enrolled in this study. SWI and multi-echo gradient echo sequence-based T2(*) mapping were used to cover the kidney before and after water loading. Cortical and medullary parameters were measured using small regions of interest, and their relative changes due to water loading were calculated based on baseline and post-water loading data. An intraclass correlation coefficient analysis was used to assess inter-observer reliability of each parameter. A receiver operating characteristic curve analysis was conducted to compare the performance of the two methods for detecting renal oxygenation changes due to water loading. Both medullary phase and medullary T2(*) values increased after water loading (p < 0.001), although poor correlations were found between the phase changes and the T2(*) changes (p > 0.05). Interobserver reliability was excellent for the T2(*) values, good for the SWI cortical phase values, and moderate for the SWI medullary phase values. The area under the receiver operating characteristic curve of the SWI medullary phase values was 0.85 and was not different from that of the medullary T2(*) values (0.84). Susceptibility-weighted imaging enabled monitoring of changes in the medullary oxygenation level after water loading, and may offer feasibility comparable to that of T2(*) mapping for detecting renal oxygenation level changes due to water loading.

  16. SU-E-J-109: Accurate Contour Transfer Between Different Image Modalities Using a Hybrid Deformable Image Registration and Fuzzy Connected Image Segmentation Method.

    PubMed

    Yang, C; Paulson, E; Li, X

    2012-06-01

    To develop and evaluate a tool that can improve the accuracy of contour transfer between different image modalities under the challenging conditions of low image contrast and large image deformation, compared with a few commonly used methods, for radiation treatment planning. The software tool includes the following steps and functionalities: (1) accepting input images of different modalities, (2) converting existing contours on reference images (e.g., MRI) into delineated volumes and adjusting the intensity within the volumes to match the intensity distribution of the target images (e.g., CT) for an enhanced similarity metric, (3) registering reference and target images using appropriate deformable registration algorithms (e.g., B-spline, demons) and generating deformed contours, (4) mapping the deformed volumes onto the target images and calculating the mean, variance, and center of mass as initialization parameters for the subsequent fuzzy connectedness (FC) image segmentation on the target images, (5) generating an affinity map from the FC segmentation, and (6) achieving final contours by modifying the deformed contours using the affinity map with a gradient distance weighting algorithm. The tool was tested with the CT and MR images of four pancreatic cancer patients acquired at the same respiration phase to minimize motion distortion. Dice's coefficient was calculated against direct delineation on the target image. Contours generated by various methods, including rigid transfer, auto-segmentation, deformable-only transfer, and the proposed method, were compared. Fuzzy connected image segmentation needs careful parameter initialization and user involvement. Automatic contour transfer by multi-modality deformable registration leads to up to 10% accuracy improvement over rigid transfer. The two additional proposed steps of adjusting the intensity distribution and modifying the deformed contour with the affinity map improve the transfer accuracy further, to 14% on average. Deformable image registration aided by contrast adjustment and fuzzy connectedness segmentation improves the contour transfer accuracy between multi-modality images, particularly with large deformation and low image contrast. © 2012 American Association of Physicists in Medicine.
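
    The evaluation metric used above, Dice's coefficient, can be computed from two binary contour masks as in the following sketch (synthetic masks shown only for illustration):

        import numpy as np

        def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
            """Dice's coefficient between two binary masks (1 = inside contour)."""
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            intersection = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 2.0 * intersection / total if total > 0 else 1.0

        # Example: transferred contour vs. direct delineation on the target image
        transferred = np.zeros((64, 64), dtype=np.uint8)
        transferred[20:45, 20:45] = 1
        reference = np.zeros((64, 64), dtype=np.uint8)
        reference[22:47, 18:43] = 1
        print(f"Dice = {dice_coefficient(transferred, reference):.3f}")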

  17. Comparative data mining analysis for information retrieval of MODIS images: monitoring lake turbidity changes at Lake Okeechobee, Florida

    NASA Astrophysics Data System (ADS)

    Chang, Ni-Bin; Daranpob, Ammarin; Yang, Y. Jeffrey; Jin, Kang-Ren

    2009-09-01

    In the remote sensing field, a frequently recurring question is: which computational intelligence or data mining algorithms are most suitable for the retrieval of essential information, given that most natural systems exhibit very high non-linearity? Potential candidates include empirical regression, neural network models, support vector machines, genetic algorithms/genetic programming, analytical equations, etc. This paper compares three types of data mining techniques, including multiple non-linear regression, artificial neural networks, and genetic programming, for estimating multi-temporal turbidity changes following hurricane events at Lake Okeechobee, Florida. This retrospective analysis aims to identify how the major hurricanes impacted water quality management in 2003-2004. The Moderate Resolution Imaging Spectroradiometer (MODIS) Terra 8-day composite imagery was used to retrieve the spatial patterns of turbidity distributions for comparison against the visual patterns discernible in the in-situ observations. By evaluating four statistical parameters, the genetic programming model was finally selected as the most suitable data mining tool for classification, in which the MODIS band 1 image and wind speed were recognized as the major determinants by the model. The multi-temporal turbidity maps generated before and after the major hurricane events in 2003-2004 showed that turbidity levels were substantially higher after hurricane episodes. The spatial patterns of turbidity confirm that sediment-laden water travels to the shore, where it reduces the intensity of the light necessary for submerged plants to perform photosynthesis. This reduction results in substantial loss of biomass during the post-hurricane period.

  18. Self-Organizing-Map Program for Analyzing Multivariate Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.

    2005-01-01

    SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
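
    A compact, generic self-organizing-map training loop (a didactic sketch, not the enhanced SOM or the Voronoi cell-refinement used in SOM_VIS) illustrates how multidimensional pixel vectors are projected onto a lattice:

        import numpy as np

        def train_som(data, grid_shape=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
            """Minimal self-organizing map; data has shape (n_samples, n_dims)."""
            rng = np.random.default_rng(seed)
            rows, cols = grid_shape
            weights = rng.random((rows, cols, data.shape[1]))
            yy, xx = np.mgrid[0:rows, 0:cols]  # lattice coordinates for the neighborhood
            for epoch in range(epochs):
                lr = lr0 * (1 - epoch / epochs)
                sigma = sigma0 * (1 - epoch / epochs) + 1e-3
                for x in data[rng.permutation(len(data))]:
                    # best-matching unit for this sample
                    dist = np.linalg.norm(weights - x, axis=2)
                    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
                    # Gaussian neighborhood pulls nearby units toward the sample
                    h = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                    weights += lr * h[..., None] * (x - weights)
            return weights

        # e.g. hypothetical 36-dimensional pixel vectors, as in MISR radiance data
        pixels = np.random.default_rng(1).random((200, 36))
        lattice = train_som(pixels)
        print(lattice.shape)  # (8, 8, 36)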

  19. Exploiting physical constraints for multi-spectral exo-planet detection

    NASA Astrophysics Data System (ADS)

    Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth

    2016-07-01

    We derive a physical model of the on-axis PSF for a high contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model on a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. The residual speckles set the most serious limitation in the detection of exo-planets in high contrast coronagraphic images provided by instruments such as SPHERE at the VLT, GPI at Gemini, or SCExAO at Subaru. A number of post-processing methods have been proposed to remove as much as possible of the residual speckles while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods like LOCI are based on angular differential imaging (ADI), spectral differential imaging (SDI), or on a combination of ADI and SDI. Instead of working on image differences, we propose to tackle the exo-planet detection as an inverse problem where a model of the residual speckles is fit on the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting. Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared to the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).

  20. Development of detailed design concepts for the EarthCARE multi-spectral imager

    NASA Astrophysics Data System (ADS)

    Lobb, Dan; Escadero, Isabel; Chang, Mark; Gode, Sophie

    2017-11-01

    The EarthCARE mission is dedicated to the study of clouds by observations from a satellite in low Earth orbit. The payload will include major radar and LIDAR instruments, supported by a multi-spectral imager (MSI) and a broadband radiometer. The paper describes development of detailed design concepts for the MSI, and analysis of critical performance parameters. The MSI will form Earth images at 500m ground sample distance (GSD) over a swath width of 150km, from a nominal platform altitude of around 400km. The task of the MSI is to provide spatial context for the single-point measurements made by the radar and LIDAR systems; it will image Earth in 7 spectral bands: one visible, one near-IR, two short-wave IR and three thermal IR. The MSI instrument will be formed in two parts: a visible-NIR-SWIR (VNS) system, radiometrically calibrated using a sun-illuminated diffuser, and a thermal IR (TIR) system calibrated using cold space and an internal black-body. The VNS system will perform push-broom imaging, using linear array detectors (silicon and InGaAs) and 4 separate lenses. The TIR system will use a microbolometer array detector in a time delay and integration (TDI) mode. Critical issues discussed for the VNS system include detector selection and detailed optical design trade-offs. The latter are related to the desirability of dichroics to achieve a common aperture, which influences the calibration hardware and lens design. The TIR system's most significant problems relate to control of random noise and bias errors, requiring optimisation of detector operation and calibration procedures.

  1. Robust detection of multiple sclerosis lesions from intensity-normalized multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Karpate, Yogesh; Commowick, Olivier; Barillot, Christian

    2015-03-01

    Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal Magnetic Resonance Images (MRI) provides a spatial analysis of the brain tissues which may lead to the discovery of biomarkers of disease evolution. Better understanding of the disease will lead to better discovery of pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm to detect white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework consists of two components. First, intensity standardization is conducted to minimize the inter-subject intensity differences arising from variability of the acquisition process and different scanners. The intensity normalization relies on mapping parameters obtained using a robust Gaussian Mixture Model (GMM) estimation that is not affected by the presence of MS lesions. The second component compares the multi-channel MRI of each MS patient with an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter as well as in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving the results of MS lesion detection.
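
    The intensity standardization step can be illustrated with a simplified sketch: fit a Gaussian Mixture Model to each subject's intensities and map the subject's component means onto reference means. This plain GMM is only a stand-in for the robust, lesion-insensitive estimation used by the authors, and all values are synthetic:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def tissue_means(intensities, n_components=3, seed=0):
            """Fit a GMM to intensities and return the sorted component means
            (roughly CSF / grey matter / white matter for T1-like contrasts)."""
            gmm = GaussianMixture(n_components=n_components, random_state=seed)
            gmm.fit(intensities.reshape(-1, 1))
            return np.sort(gmm.means_.ravel())

        def normalize_to_reference(subject, reference_means, n_components=3):
            """Piecewise-linear intensity mapping that aligns the subject's GMM
            component means to those of a reference (atlas) population."""
            subj_means = tissue_means(subject, n_components)
            return np.interp(subject, subj_means, reference_means)

        rng = np.random.default_rng(0)
        reference = np.concatenate([rng.normal(m, 5, 2000) for m in (30, 70, 110)])
        subject = np.concatenate([rng.normal(m, 5, 2000) for m in (50, 95, 140)])
        normalized = normalize_to_reference(subject, tissue_means(reference))
        print(tissue_means(normalized).round(1))  # close to the reference means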

  2. Enhanced multi-protocol analysis via intelligent supervised embedding (EMPrAvISE): detecting prostate cancer on multi-parametric MRI

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Bloch, B. Nicholas; Chappelow, Jonathan; Patel, Pratik; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant

    2011-03-01

    Currently, there is significant interest in developing methods for quantitative integration of multi-parametric (structural, functional) imaging data with the objective of building automated meta-classifiers to improve disease detection, diagnosis, and prognosis. Such techniques are required to address the differences in dimensionalities and scales of individual protocols, while deriving an integrated multi-parametric data representation which best captures all disease-pertinent information available. In this paper, we present a scheme called Enhanced Multi-Protocol Analysis via Intelligent Supervised Embedding (EMPrAvISE); a powerful, generalizable framework applicable to a variety of domains for multi-parametric data representation and fusion. Our scheme utilizes an ensemble of embeddings (via dimensionality reduction, DR); thereby exploiting the variance amongst multiple uncorrelated embeddings in a manner similar to ensemble classifier schemes (e.g. Bagging, Boosting). We apply this framework to the problem of prostate cancer (CaP) detection on 12 3 Tesla pre-operative in vivo multi-parametric (T2-weighted, Dynamic Contrast Enhanced, and Diffusion-weighted) magnetic resonance imaging (MRI) studies, in turn comprising a total of 39 2D planar MR images. We first align the different imaging protocols via automated image registration, followed by quantification of image attributes from individual protocols. Multiple embeddings are generated from the resultant high-dimensional feature space which are then combined intelligently to yield a single stable solution. Our scheme is employed in conjunction with graph embedding (for DR) and probabilistic boosting trees (PBTs) to detect CaP on multi-parametric MRI. Finally, a probabilistic pairwise Markov Random Field algorithm is used to apply spatial constraints to the result of the PBT classifier, yielding a per-voxel classification of CaP presence. Per-voxel evaluation of detection results against ground truth for CaP extent on MRI (obtained by spatially registering pre-operative MRI with available whole-mount histological specimens) reveals that EMPrAvISE yields a statistically significant improvement (AUC=0.77) over classifiers constructed from individual protocols (AUC=0.62, 0.62, 0.65, for T2w, DCE, DWI respectively) as well as one trained using multi-parametric feature concatenation (AUC=0.67).

  3. Multi-image mosaic with SIFT and vision measurement for microscale structures processed by femtosecond laser

    NASA Astrophysics Data System (ADS)

    Wang, Fu-Bin; Tu, Paul; Wu, Chen; Chen, Lei; Feng, Ding

    2018-01-01

    In femtosecond laser processing, the field of view of each image frame of the microscale structure is extremely small. In order to obtain the morphology of the whole microstructure, a multi-image mosaic with partially overlapping regions is required. In the present work, the SIFT algorithm for mosaicking images was analyzed theoretically and, using multiple images of a microgroove structure processed by femtosecond laser, a stitched image of the whole groove structure was realized and studied experimentally. The object of our research was a silicon wafer with a microgroove structure ablated by femtosecond laser. First, we obtained microgrooves with a width of 380 μm at different depths. Second, based on the gray image of the microgroove, a multi-image mosaic of the slot width and slot depth was realized. In order to improve the image contrast between the target and the background, and taking the slot depth image as an example, a multi-image mosaic was then realized using pseudo-color enhancement. Third, in order to measure the structural size of the microgroove from the image, a streak of known width ablated by femtosecond laser at 20 mW was used as a calibration sample. Through edge detection, corner extraction, and image correction for the streak images, we calculated the pixel width of the streak image and found the measurement ratio constant Kw in the width direction, thereby obtaining the proportional relationship between a pixel and a micrometer. Finally, circular spot marks ablated by femtosecond laser at 2 mW and 15 mW were used as test images; after verifying that the value Kw was correct, the measurement ratio constant Kh in the height direction was obtained, and image measurements of a 380 × 117 μm microgroove were realized based on the measurement ratio constants Kw and Kh. The research and experimental results show that the image mosaic, image calibration, and geometric image parameter measurements for the microstructural image ablated by femtosecond laser were realized effectively.
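
    The pixel-to-micrometer calibration described above amounts to a simple ratio; the numbers in the following sketch are hypothetical and chosen only to illustrate how a measurement ratio constant such as Kw would be applied:

        def measurement_ratio(known_width_um: float, edge_left_px: float, edge_right_px: float) -> float:
            """Measurement ratio constant (micrometers per pixel) from a calibration
            streak of known physical width whose edges were located in the image."""
            pixel_width = abs(edge_right_px - edge_left_px)
            return known_width_um / pixel_width

        # Hypothetical numbers: a 100 um calibration streak spans 262 pixels,
        # and the microgroove spans 995 pixels in the stitched image.
        Kw = measurement_ratio(100.0, 120.0, 382.0)
        groove_width_um = 995 * Kw
        print(f"Kw = {Kw:.3f} um/px, groove width = {groove_width_um:.0f} um")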

  4. MRI-based biomechanical parameters for carotid artery plaque vulnerability assessment.

    PubMed

    Speelman, Lambert; Teng, Zhongzhao; Nederveen, Aart J; van der Lugt, Aad; Gillard, Jonathan H

    2016-03-01

    Carotid atherosclerotic plaques are a major cause of ischaemic stroke. The biomechanical environment to which the arterial wall and plaque are subjected plays an important role in the initiation, progression and rupture of carotid plaques. MRI is frequently used to characterize the morphology of a carotid plaque, but new developments in MRI enable more functional assessment of carotid plaques. In this review, MRI based biomechanical parameters are evaluated with respect to their current status, clinical applicability, and future developments. Blood flow related biomechanical parameters, including endothelial wall shear stress and oscillatory shear index, have been shown to be related to plaque formation. Deriving these parameters directly from MRI flow measurements is feasible and has great potential for future carotid plaque development prediction. Blood pressure induced stresses in a plaque may exceed the tissue strength, potentially leading to plaque rupture. Multi-contrast MRI based stress calculations in combination with tissue strength assessment based on MRI inflammation imaging may provide a plaque stress-strength balance that can be used to assess the plaque rupture risk potential. Direct plaque strain analysis based on dynamic MRI is already able to identify local plaque displacement during the cardiac cycle. However, clinical evidence linking MRI strain to plaque vulnerability is still lacking. MRI based biomechanical parameters may lead to improved assessment of carotid plaque development and rupture risk. However, better MRI systems and faster sequences are required to improve the spatial and temporal resolution, as well as increase the image contrast and signal-to-noise ratio.

  5. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate the factors affecting the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) according to the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of the hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), slit images for measuring the MTF, and flat-field (white) images for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. As a result, all of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be taken into account when characterizing image quality in a consistent way. The results of this study could serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
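
    For reference, the commonly used normalized relation that ties the three measured quantities together (stated here in generic form; the exact estimator details follow IEC 62220-1 and are not spelled out in the record above) is:

        \mathrm{DQE}(f) = \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)},
        \qquad
        \mathrm{NNPS}(f) = \frac{\mathrm{NPS}(f)}{\bar{S}^{2}}

    where q is the incident photon fluence per unit area for the RQA5 beam quality, \bar{S} is the mean large-area signal, and NNPS is the normalized noise power spectrum.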

  6. RAMTaB: Robust Alignment of Multi-Tag Bioimages

    PubMed Central

    Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.

    2012-01-01

    Background In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature, which addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510

  7. Optical design of multi-multiple expander structure of laser gas analysis and measurement device

    NASA Astrophysics Data System (ADS)

    Fu, Xiang; Wei, Biao

    2018-03-01

    The installation and debugging of the optical circuit structure in distributed laser gas analysis and measurement of carbon monoxide presents difficult key technical problems. Based on the three-component expansion theory, a multi-multiple expander structure with expansion ratios of 4, 5, 6 and 7 is adopted in the absorption chamber to enhance the adaptability of the gas analysis and measurement device to its installation environment. According to the basic theory of aberration, the optimal design of the multi-multiple beam expander structure is carried out. Using an image quality evaluation method, the difference in image quality under different magnifications is analyzed. The results show that the optical quality of the optical system with the expanded beam structure is best when the expansion ratio is 5-7.

  8. Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition

    PubMed Central

    Ong, Frank; Lustig, Michael

    2016-01-01

    We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
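
    The convex formulation sketched above can be written, under generic notational assumptions (this is an illustrative rendering, not necessarily the authors' exact notation), as a sum of block-wise nuclear norms over scales, constrained to reproduce the data matrix:

        \min_{X_1,\dots,X_L} \; \sum_{i=1}^{L} \lambda_i \sum_{b \in \mathcal{B}_i} \left\| R_b(X_i) \right\|_{*}
        \quad \text{subject to} \quad Y = \sum_{i=1}^{L} X_i

    where R_b extracts block b from the partition \mathcal{B}_i at scale i, \| \cdot \|_{*} is the nuclear norm, and the \lambda_i are scale-dependent regularization parameters.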

  9. Medical Image Retrieval Using Multi-Texton Assignment.

    PubMed

    Tang, Qiling; Yang, Jirong; Xia, Xianfu

    2018-02-01

    In this paper, we present a multi-texton representation method for medical image retrieval, which utilizes a locality constraint to encode each filter bank response within its local coordinate system, consisting of the k nearest neighbors in the texton dictionary, and subsequently employs the spatial pyramid matching technique to implement the feature vector representation. Compared with the traditional nearest-neighbor assignment followed by texton histogram statistics, our strategies reduce the quantization errors in the mapping process and add information about the spatial layout of texton distributions, thus increasing the descriptive power of the image representation. We investigate the effects of different parameters on system performance in order to choose the appropriate ones for our datasets and carry out experiments on the IRMA-2009 medical collection and the mammographic patch dataset. The extensive experimental results demonstrate that the proposed method has superior performance.

  10. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.

    PubMed

    Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
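
    A minimal sketch of the feedback idea, assuming a single thresholding parameter and an abstract ground truth consisting only of an expected object count (the segmentation routine and quality criterion here are illustrative, not the authors' pipeline):

        import numpy as np
        from skimage import filters, measure

        def segment(image, threshold_offset):
            """Simple segmentation whose single parameter is adapted by feedback."""
            thresh = filters.threshold_otsu(image) + threshold_offset
            return measure.label(image > thresh)

        def quality(labels, expected_count):
            """Feedback criterion using only abstract ground truth (object count)."""
            return -abs(labels.max() - expected_count)

        def adapt_parameter(image, expected_count, candidates):
            """Pick the parameter value whose segmentation best satisfies the feedback."""
            scores = [quality(segment(image, c), expected_count) for c in candidates]
            return candidates[int(np.argmax(scores))]

        rng = np.random.default_rng(0)
        img = rng.random((128, 128))
        img[30:50, 30:50] += 2.0   # two synthetic bright objects
        img[80:100, 70:90] += 2.0
        best = adapt_parameter(img, expected_count=2, candidates=np.linspace(-0.2, 0.2, 9))
        print("selected threshold offset:", best)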

  11. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines

    PubMed Central

    Mikut, Ralf; Reischl, Markus

    2016-01-01

    The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts. PMID:27764213

  12. A Radiative Analysis of Angular Signatures and Oblique Radiance Retrievals over the Polar Regions from the Multi-Angle Imaging Spectroradiometer

    ERIC Educational Resources Information Center

    Wilson, Michael Jason

    2009-01-01

    This dissertation studies clouds over the polar regions using the Multi-angle Imaging SpectroRadiometer (MISR) on-board EOS-Terra. Historically, low thin clouds have been problematic for satellite detection, because these clouds have similar brightness and temperature properties to the surface they overlay. However, the oblique angles of MISR…

  13. Automated road network extraction from high spatial resolution multi-spectral imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Qiaoping

    For the last three decades, the Geomatics Engineering and Computer Science communities have considered automated road network extraction from remotely-sensed imagery to be a challenging and important research topic. The main objective of this research is to investigate the theory and methodology of automated feature extraction for image-based road database creation, refinement or updating, and to develop a series of algorithms for road network extraction from high resolution multi-spectral imagery. The proposed framework for road network extraction from multi-spectral imagery begins with an image segmentation using the k-means algorithm. This step mainly concerns the exploitation of the spectral information for feature extraction. The road cluster is automatically identified using a fuzzy classifier based on a set of predefined road surface membership functions. These membership functions are established based on the general spectral signature of road pavement materials and the corresponding normalized digital numbers on each multi-spectral band. Shape descriptors of the Angular Texture Signature are defined and used to reduce the misclassifications between roads and other spectrally similar objects (e.g., crop fields, parking lots, and buildings). An iterative and localized Radon transform is developed for the extraction of road centerlines from the classified images. The purpose of the transform is to accurately and completely detect the road centerlines. It is able to find short, long, and even curvilinear lines. The input image is partitioned into a set of subset images called road component images. An iterative Radon transform is locally applied to each road component image. At each iteration, road centerline segments are detected based on an accurate estimation of the line parameters and line widths. Three localization approaches are implemented and compared using qualitative and quantitative methods. Finally, the road centerline segments are grouped into a road network. The extracted road network is evaluated against a reference dataset using a line segment matching algorithm. The entire process is unsupervised and fully automated. Based on extensive experimentation on a variety of remotely-sensed multi-spectral images, the proposed methodology achieves a moderate success in automating road network extraction from high spatial resolution multi-spectral imagery.
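
    As a minimal illustration of the Radon-transform step for centerline detection (a sketch using scikit-image on a synthetic road component, not the dissertation's iterative and localized implementation):

        import numpy as np
        from skimage.transform import radon

        def dominant_line(road_component, angles=np.arange(0.0, 180.0)):
            """Locate the strongest straight-line response in a binary road-component
            image using the Radon transform; the peak gives an angle and an offset."""
            sinogram = radon(road_component.astype(float), theta=angles, circle=False)
            offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
            return angles[angle_idx], offset_idx - sinogram.shape[0] // 2

        # Synthetic road component: a narrow vertical strip of "road" pixels
        img = np.zeros((100, 100))
        img[:, 48:52] = 1.0
        angle, offset = dominant_line(img)
        print(f"dominant line: angle {angle:.0f} deg, offset {offset} px from center")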

  14. The iMars WebGIS - Spatio-Temporal Data Queries and Single Image Map Web Services

    NASA Astrophysics Data System (ADS)

    Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Muller, Jan-Peter; van Gasselt, Stephan; Sidiropoulos, Panagiotis; Lanz-Kroechert, Julia

    2017-04-01

    Introduction: Web-based planetary image dissemination platforms usually show outline coverages of the data and offer querying for metadata as well as preview and download, e.g. the HRSC Mapserver (Walter & van Gasselt, 2014). Here we introduce a new approach for a system dedicated to change detection by simultaneous visualisation of single-image time series in a multi-temporal context. While the usual form of presenting multi-orbit datasets is the merge of the data into a larger mosaic, we want to stay with the single image as an important snapshot of the planetary surface at a specific time. In the context of the EU FP-7 iMars project we process and ingest vast amounts of automatically co-registered (ACRO) images. The basis of the co-registration is the high-precision HRSC multi-orbit quadrangle image mosaics, which are based on bundle-block-adjusted multi-orbit HRSC DTMs. Additionally we make use of the existing bundle-adjusted HRSC single images available at the PDS archives. A prototype demonstrating the presented features is available at http://imars.planet.fu-berlin.de. Multi-temporal database: In order to locate multiple coverage of images and select images based on spatio-temporal queries, we converge available coverage catalogs for various NASA imaging missions into a relational database management system with geometry support. We harvest available metadata entries during our processing pipeline using the Integrated Software for Imagers and Spectrometers (ISIS) software. Currently, this database contains image outlines from the MGS/MOC, MRO/CTX and the MO/THEMIS instruments with imaging dates ranging from 1996 to the present. For the MEx/HRSC data, we already maintain a database which we automatically update with custom software based on the VICAR environment. Web Map Service with time support: The MapServer software is connected to the database and provides Web Map Services (WMS) with time support based on the START_TIME image attribute. It allows temporal WMS GetMap requests by setting additional TIME parameter values in the request. The values for the parameter represent an interval defined by its lower and upper bounds. As the WMS time standard only supports one time variable, only the start times of the images are considered. If no time values are submitted with the request, the full time range of all images is assumed as the default. Dynamic single image WMS: To compare images from different acquisition times at sites of multiple coverage, we have to load every image as a single WMS layer. Due to the vast amount of single images we need a way to set up the layers in a dynamic way - the map server does not know the images to be served beforehand. We use the MapScript interface to dynamically access MapServer's objects and configure the file name and path of the requested image in the map configuration. The layers are created on the fly, each representing only one single image. On the frontend side, the vendor-specific WMS request parameter (PRODUCTID) has to be appended to the regular set of WMS parameters. The request is then passed on to the MapScript instance. Web Map Tile Cache: In order to speed up access to the WMS requests, a MapCache instance has been integrated in the pipeline. As it is not aware of the available PDS product IDs which will be queried, the PRODUCTID parameter is configured as an additional dimension of the cache. The WMS request is received by the Apache webserver configured with the MapCache module.
If the tile is available in the tile cache, it is immediately committed to the client. If not available, the tile request is forwarded to Apache and the MapScript module. The Python script intercepts the WMS request and extracts the product ID from the parameter chain. It loads the layer object from the map file and appends the file name and path of the requested image. After some possible further image processing inside the script (stretching, color matching), the request is submitted to the MapServer backend, which in turn delivers the response back to the MapCache instance. Web frontend: We have implemented a web-GIS frontend based on various OpenLayers components. The basemap is a global color-hillshaded HRSC bundle-adjusted DTM mosaic with a resolution of 50 m per pixel. The new bundle-block-adjusted quadrangle mosaics of the MC-11 quadrangle, both image and DTM, are included with opacity slider options. The layer user interface has been adapted on the basis of the ol3-layerswitcher and extended by foldable and switchable groups, layer sorting (by resolution, by time and alphabetically) and reordering (drag-and-drop). A collapsible time panel accommodates a time slider interface where the user can filter the visible data by a range of Mars or Earth dates and/or by solar longitudes. The visualisation of time-series of single images is controlled by a specific toolbar enabling the workflow of image selection (by point or bounding box), dynamic image loading and playback of single images in a video player-like environment. During a stress-test campaign we could demonstrate that the system is capable of serving up to 10 simultaneous users on its current lightweight development hardware. It is planned to relocate the software to more powerful hardware by the time of this conference. Conclusions/Outlook: The iMars webGIS is an expert tool for the detection and visualization of surface changes. We demonstrate a technique to dynamically retrieve and display single images based on the time-series structure of the data. Together with the multi-temporal database and its MapServer/MapCache backend it provides a stable and high performance environment for the dissemination of the various iMars products. Acknowledgements: This research has received funding from the EU's FP7 Programme under iMars 607379 and by the German Space Agency (DLR Bonn), grant 50 QM 1301 (HRSC on Mars Express).
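
    As a concrete illustration of the temporal and single-image WMS mechanisms described above, the following Python sketch builds a GetMap request carrying the TIME interval and the vendor-specific PRODUCTID parameter. The endpoint URL, layer name, and product ID below are placeholders, not actual iMars identifiers:

        import requests

        # Placeholder endpoint and identifiers; only the TIME and PRODUCTID
        # parameters illustrate the mechanisms described in the record above.
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": "single_images",            # assumed layer name
            "CRS": "EPSG:4326",
            "BBOX": "-5,350,0,355",
            "WIDTH": "512",
            "HEIGHT": "512",
            "FORMAT": "image/png",
            "TIME": "2007-01-01/2010-12-31",      # interval of image start times
            "PRODUCTID": "EXAMPLE_PRODUCT_ID",    # hypothetical single-image ID
        }
        response = requests.get("http://example.org/imars/wms", params=params)
        print(response.status_code, response.headers.get("Content-Type"))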

  15. Robust, Globally Consistent, and Fully-automatic Multi-image Registration and Montage Synthesis for 3-D Multi-channel Images

    PubMed Central

    Tsai, Chia-Ling; Lister, James P.; Bjornsson, Christopher J; Smith, Karen; Shain, William; Barnes, Carol A.; Roysam, Badrinath

    2013-01-01

    The need to map regions of brain tissue that are much wider than the field of view of the microscope arises frequently. One common approach is to collect a series of overlapping partial views, and align them to synthesize a montage covering the entire region of interest. We present a method that advances this approach in multiple ways. Our method (1) produces a globally consistent joint registration of an unorganized collection of 3-D multi-channel images with or without stage micrometer data; (2) produces accurate registrations withstanding changes in scale, rotation, translation and shear by using a 3-D affine transformation model; (3) achieves complete automation, and does not require any parameter settings; (4) handles low and variable overlaps (5 – 15%) between adjacent images, minimizing the number of images required to cover a tissue region; (5) has the self-diagnostic ability to recognize registration failures instead of delivering incorrect results; (6) can handle a broad range of biological images by exploiting generic alignment cues from multiple fluorescence channels without requiring segmentation; and (7) is computationally efficient enough to run on desktop computers regardless of the number of images. The algorithm was tested with several tissue samples of at least 50 image tiles, involving over 5,000 image pairs. It correctly registered all image pairs with an overlap greater than 7%, correctly recognized all failures, and successfully joint-registered all images for all tissue samples studied. This algorithm is disseminated freely to the community as included with the FARSIGHT toolkit for microscopy (www.farsight-toolkit.org). PMID:21361958

  16. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878
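
    A compact way to summarize the reconstruction described above is the general PICCS-style constrained optimization, with the full-spectrum FBP image used as the prior (a sketch of the standard formulation with generic symbols, not necessarily the paper's notation):

        \hat{x}_{\mathrm{bin}} = \arg\min_{x} \; \alpha \,\| \Psi_1 (x - x_{\mathrm{prior}}) \|_1 + (1-\alpha)\,\| \Psi_2\, x \|_1
        \quad \text{subject to} \quad \| A x - y_{\mathrm{bin}} \|_2 \le \varepsilon

    where A is the forward projection operator, y_bin the measured bin data, \Psi_1 and \Psi_2 sparsifying transforms, and \alpha the prior-image weighting parameter.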

  17. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  18. Final Report 2007: DOE-FG02-87ER60561

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilbourn, Michael R

    2007-04-26

    This project involved a multi-faceted approach to the improvement of techniques used in Positron Emission Tomography (PET), from radiochemistry to image processing and data analysis. New methods for radiochemical syntheses were examined, new radiochemicals prepared for evaluation and eventual use in human PET studies, and new pre-clinical methods examined for validation of biochemical parameters in animal studies. The value of small animal PET imaging in measuring small changes of in vivo biochemistry was examined and directly compared to traditional tissue sampling techniques. In human imaging studies, the ability to perform single experimental sessions utilizing two overlapping injections of radiopharmaceuticals was tested, and it was shown that valid biochemical measures for both radiotracers can be obtained through careful pharmacokinetic modeling of the PET emission data. Finally, improvements in reconstruction algorithms for PET data from small animal PET scanners were realized and these have been implemented in commercial releases. Together, the project represented an integrated effort to improve and extend all basic science aspects of PET imaging at both the animal and human level.

  19. Oblique Aerial Photography Tool for Building Inspection and Damage Assessment

    NASA Astrophysics Data System (ADS)

    Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.

    2014-11-01

    Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology in a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions by using monoplotting functionalities. The developed application allows measuring building heights and distances and digitizing man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate height of buildings and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, the quality of the available parameters (DEM, calibration and orientation values), user expertise and measuring capability.

  20. Targeting Neuronal-like Metabolism of Metastatic Tumor Cells as a Novel Therapy for Breast Cancer Brain Metastasis

    DTIC Science & Technology

    2017-03-01

    Contribution to Project: Ian primarily focuses on developing the tissue imaging pipeline and performing imaging data analysis. Funding Support: Partially...3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous... analysis pipeline, to observe and quantify phenotypic metastatic landscape heterogeneity in situ with spatial and molecular resolution. Our implementation

  1. Deep convolutional neural network training enrichment using multi-view object-based analysis of Unmanned Aerial systems imagery for wetlands classification

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Abd-Elrahman, Amr

    2018-05-01

    A deep convolutional neural network (DCNN) requires massive training datasets to trigger its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of an Unmanned Aerial Systems (UAS) orthoimage, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of the DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross-validation results show the mean overall classification accuracy increasing substantially from 65.32%, when the DCNN was applied to the orthoimage, to 82.08% when MODe was implemented. This study also compared the performance of the support vector machine (SVM) and random forest (RF) classifiers with the DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of the DCNN over traditional classifiers in terms of accuracy is more obvious when these classifiers are applied within the proposed multi-view OBIA framework than within the traditional OBIA framework.
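
    The voting step of the multi-view classification can be illustrated with a simple majority vote over per-view predictions (the class labels and object IDs below are hypothetical, and the snippet stands in for the fusion step only):

        from collections import Counter

        def multi_view_vote(per_view_labels):
            """Fuse per-object class predictions from multiple UAS views by
            majority vote, as in the voting procedure of a multi-view OBIA classifier."""
            return {obj_id: Counter(labels).most_common(1)[0][0]
                    for obj_id, labels in per_view_labels.items()}

        # Hypothetical predictions for three image objects seen in five views each
        views = {
            "object_01": ["marsh", "marsh", "water", "marsh", "marsh"],
            "object_02": ["upland", "marsh", "upland", "upland", "upland"],
            "object_03": ["water", "water", "water", "marsh", "water"],
        }
        print(multi_view_vote(views))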

  2. Development and testing of a homogenous multi-wavelength LED light source

    NASA Astrophysics Data System (ADS)

    Bolton, Frank J.; Bernat, Amir; Jacques, Steven L.; Levitz, David

    2017-03-01

    Multispectral imaging of human tissue is a powerful method that allows for quantifying scattering and absorption parameters of the tissue and differentiating tissue types or identifying pathology. This method requires imaging at multiple wavelengths and then fitting the measured data to a model based on light transport theory. Earlier, a mobile phone based multi-spectral imaging system was developed to image the uterine cervix from the colposcopy geometry, outside the patient's body at a distance of 200-300 mm. Such imaging of a distant object has inherent challenges, as bright and homogeneous illumination is required. Several solutions addressing this problem were developed, with varying degrees of success. In this paper, several multi-spectral illumination setups were developed and tested for brightness and uniformity. All setups were specifically designed with low cost in mind, utilizing a printed circuit board with surface-mounted LEDs. The three setups include: LEDs illuminating the target directly, LEDs focused by a 3D printed miniature lens array, and LEDs coupled to a mixing lens and focusing optical system. In order to compare the illumination uniformity and intensity performance, two experiments were performed. Test results are presented, and various tradeoffs between the three system configurations are discussed.

  3. a Sensor Aided H.264/AVC Video Encoder for Aerial Video Sequences with in the Loop Metadata Correction

    NASA Astrophysics Data System (ADS)

    Cicala, L.; Angelino, C. V.; Ruatta, G.; Baccaglini, E.; Raimondo, N.

    2015-08-01

    Unmanned Aerial Vehicles (UAVs) are often employed to collect high resolution images in order to perform image mosaicking and/or 3D reconstruction. Images are usually stored on board and then processed with on-ground desktop software. In such a way the computational load, and hence the power consumption, is moved on ground, leaving on board only the task of storing data. Such an approach is important in the case of small multi-rotorcraft UAVs because of their low endurance due to the short battery life. Images can be stored on board with either still image or video data compression. Still image systems are preferred when low frame rates are involved, because video coding systems are based on motion estimation and compensation algorithms which fail when the motion vectors are significantly long and when the overlap between subsequent frames is very small. In this scenario, UAV attitude and position metadata from the Inertial Navigation System (INS) can be employed to estimate global motion parameters without video analysis. A low complexity image analysis can still be performed in order to refine the motion field estimated using only the metadata. In this work, we propose to use this refinement step to improve the position and attitude estimates produced by the navigation system, in order to maximize the encoder performance. Experiments are performed on both simulated and real world video sequences.

  4. Speckle noise reduction in SAR images ship detection

    NASA Astrophysics Data System (ADS)

    Yuan, Ji; Wu, Bin; Yuan, Yuan; Huang, Qingqing; Chen, Jingbo; Ren, Lin

    2012-09-01

    At present, there are two types of methods to detect ships in SAR images. One is direct detection, which detects the ships themselves. The other is indirect detection, which first detects ship wakes and then seeks ships around the wakes. Both types are affected by speckle noise. In order to improve the accuracy of ship detection and obtain accurate ship and ship wake parameters from SAR images, such as ship length, ship width, ship area, the angle of ship wakes and the ship outline, it is necessary to remove speckle noise from SAR images before the data are used in SAR image ship detection. The choice of speckle noise reduction filter depends on the requirements of the particular application. Some common filters are widely used for speckle noise reduction, such as the mean filter, the median filter, the Lee filter, the enhanced Lee filter, the Kuan filter, the Frost filter, the enhanced Frost filter and the Gamma filter, but these filters show disadvantages in SAR image ship detection because of the variety of ship types. Therefore, the wavelet transform and multi-resolution analysis were used to decompose a SAR ocean image into different frequency components or useful subbands, and to effectively reduce the speckle in the subbands according to the local statistics within the bands. Finally, an analysis of the statistical results is presented, which demonstrates the advantages and disadvantages of using wavelet shrinkage techniques over standard speckle filters.
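    As a rough illustration of the wavelet-shrinkage idea described above (not the authors' exact filter), the sketch below decomposes an image with a 2-D wavelet transform, soft-thresholds the detail subbands, and reconstructs. It assumes the PyWavelets package; the universal threshold rule and the wavelet choice are placeholders that would need tuning for real SAR data.

```python
import numpy as np
import pywt

def wavelet_despeckle(image, wavelet="db4", level=3):
    """Generic subband soft-thresholding sketch; not the paper's specific filter."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # Noise scale estimated from the finest diagonal subband (robust MAD estimate).
    sigma = np.median(np.abs(details[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    new_details = [tuple(pywt.threshold(d, thresh, mode="soft") for d in band)
                   for band in details]
    return pywt.waverec2([approx] + new_details, wavelet)

# Usage (any 2-D float array, e.g. an intensity SAR chip):
# filtered = wavelet_despeckle(sar_chip.astype(float))
```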

  5. Classification algorithm of lung lobe for lung disease cases based on multislice CT images

    NASA Astrophysics Data System (ADS)

    Matsuhiro, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Mishima, M.; Ohmatsu, H.; Tsuchida, T.; Eguchi, K.; Kaneko, M.; Moriyama, N.

    2011-03-01

    With the development of multi-slice CT technology, it has become possible to obtain an accurate 3D image of the lung field in a short time. To support this, many image processing methods need to be developed. In the clinical setting for diagnosis of lung cancer, it is important to study and analyse lung structure, and the classification of lung lobes provides useful information for lung cancer analysis. In this report, we describe an algorithm that classifies the lungs into lung lobes for lung disease cases from multi-slice CT images. The classification of lung lobes is efficiently carried out using information on the lung blood vessels, bronchi, and interlobar fissures. Applying the classification algorithm to multi-slice CT images of 20 normal cases and 5 lung disease cases, we demonstrate the usefulness of the proposed algorithm.

  6. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

    The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The currently popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features are worth more attention than the others. To avoid the problem of over-segmentation and to highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and gives priority control to the saliency objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.

  8. SAR Image Simulation of Ship Targets Based on Multi-Path Scattering

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Wang, H.; Ma, H.; Li, K.; Xia, Z.; Hao, Y.; Guo, H.; Shi, H.; Liao, X.; Yue, H.

    2018-04-01

    Synthetic Aperture Radar (SAR) plays an important role in the classification and recognition of ship targets because of its all-weather capability and fine resolution. In SAR images, besides sea clutter, the sea surface also influences the radar echo through the so-called multipath effect. These multipath effects generate extra "pseudo images", which may distort the target image and affect the estimation of the characteristic parameters. In this paper, the multipath effect of a rough sea surface and its influence on the estimation of ship characteristic parameters are studied. The imaging of the first and second reflections off the sea surface is presented. The artifacts not only overlap with the image of the target itself, but may also appear in the sea near the target area. They are difficult to distinguish, and these artifacts affect the estimated length and width of the ship.

  9. On the fallacy of quantitative segmentation for T1-weighted MRI

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Harrigan, Robert L.; Newton, Allen T.; Rane, Swati; Pallavaram, Srivatsan; D'Haese, Pierre F.; Dawant, Benoit M.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure "similar" contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply "T1-weighted". Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with "normal study-to-study variation" in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images acquired with any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve the consistency of target labeling.

  10. Coastal flooding as a parameter in multi-criteria analysis for industrial site selection

    NASA Astrophysics Data System (ADS)

    Christina, C.; Memos, C.; Diakoulaki, D.

    2014-12-01

    Natural hazards can trigger major industrial accidents, which apart from affecting industrial installations may cause a series of accidents with serious impacts on human health and the environment far beyond the site boundary. Such accidents, also called Na-Tech (natural-technical) accidents, deserve particular attention since they can cause the release of hazardous substances, possibly resulting in severe environmental pollution, explosions and/or fires. Different kinds of natural events or, in general terms, natural causes have triggered industrial accidents, such as landslides, hurricanes, high winds, tsunamis, lightning, cold/hot temperatures, floods, and heavy rains. The scope of this paper is to examine coastal flooding as a parameter in causing an industrial accident, such as the nuclear disaster in Fukushima, Japan, and the critical role of this parameter in industrial site selection. Land use planning is a complex procedure that requires multi-criteria decision analysis involving economic, environmental and social parameters. In this context, the parameter of natural hazard occurrence, such as coastal flooding, should be set by the decision makers for industrial site selection. In this paper, we evaluate the influence that the risk of an accident triggered by coastal flooding has on the outcome of a multi-criteria decision analysis for industrial spatial planning. The latter is analyzed in the context of both sea- and inland-induced flooding.

  11. MUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Kim, Hwa Pyung; Lee, Kwang-Jae; Son, Seong-Ho

    2017-11-01

    Motivated by the biomedical engineering used in early-stage breast cancer detection, we investigated the use of the MUltiple SIgnal Classification (MUSIC) algorithm for locating small anomalies using S-parameters. We considered the application of MUSIC to functional imaging where a small number of dipole antennas are used. Our approach is based on the application of the Born approximation or physical factorization. We analyzed cases in which the anomaly is small or large relative to the wavelength, and linked the structure of the left-singular vectors to the nonzero singular values of the Multi-Static Response (MSR) matrix whose elements are the S-parameters. Using simulations, we demonstrated the strengths and weaknesses of the MUSIC algorithm in detecting both small and extended anomalies.
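    The core MUSIC step described above can be sketched as follows: the SVD of the multi-static response (MSR) matrix assembled from measured S-parameters separates a signal subspace from a noise subspace, and the imaging functional peaks wherever a test vector is nearly orthogonal to the noise subspace. The free-space-phase test vector and the function/parameter names below are simplifications and assumptions, not the paper's exact formulation.

```python
import numpy as np

def music_map(msr, antenna_pos, grid_points, wavenumber, signal_dim):
    """MUSIC imaging functional built from an MSR matrix of S-parameters (sketch).

    msr:         (N, N) complex multi-static response matrix
    antenna_pos: (N, 2) antenna coordinates
    grid_points: (M, 2) search points in the imaging domain
    signal_dim:  assumed number of significant (nonzero) singular values
    """
    u, _, _ = np.linalg.svd(msr)
    noise_basis = u[:, signal_dim:]              # left singular vectors spanning the noise subspace
    image = np.empty(len(grid_points))
    for i, point in enumerate(grid_points):
        dist = np.linalg.norm(antenna_pos - point, axis=1)
        steering = np.exp(1j * wavenumber * dist)   # simplified free-space test vector
        steering = steering / np.linalg.norm(steering)
        # The functional peaks where the test vector is (nearly) orthogonal to the noise subspace.
        image[i] = 1.0 / np.linalg.norm(noise_basis.conj().T @ steering)
    return image
```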

  12. Converting Multi-Shell and Diffusion Spectrum Imaging to High Angular Resolution Diffusion Imaging

    PubMed Central

    Yeh, Fang-Cheng; Verstynen, Timothy D.

    2016-01-01

    Multi-shell and diffusion spectrum imaging (DSI) are becoming increasingly popular methods of acquiring diffusion MRI data in a research context. However, single-shell acquisitions, such as diffusion tensor imaging (DTI) and high angular resolution diffusion imaging (HARDI), still remain the most common acquisition schemes in practice. Here we tested whether multi-shell and DSI data are flexible enough to be converted (interpolated) into corresponding HARDI data. We acquired multi-shell and DSI data on both a phantom and in vivo human tissue and converted them to HARDI. The correlation and difference between their diffusion signals, anisotropy values, diffusivity measurements, fiber orientations, connectivity matrices, and network measures were examined. Our analysis showed that the diffusion signals, anisotropy, diffusivity, and connectivity matrix of the HARDI converted from multi-shell and DSI data were highly correlated with those of the HARDI acquired on the MR scanner, with correlation coefficients around 0.8~0.9. The average angular error between converted and original HARDI was 20.7° at voxels with signal-to-noise ratios greater than 5. The network topology measures differed by less than 2%, whereas the average nodal measures differed by around 4~7%. In general, multi-shell and DSI acquisitions can be converted to their corresponding single-shell HARDI with high fidelity. This supports multi-shell and DSI acquisitions over HARDI acquisition as the scheme of choice for diffusion acquisitions. PMID:27683539

  13. The Performance Analysis Based on SAR Sample Covariance Matrix

    PubMed Central

    Erten, Esra

    2012-01-01

    Multi-channel systems appear in several fields of application in science. In the Synthetic Aperture Radar (SAR) context, multi-channel systems may refer to different domains, such as multi-polarization, multi-interferometric or multi-temporal data, or even a combination of them. Due to the inherent speckle phenomenon present in SAR images, a statistical description of the data is almost mandatory for its utilization. The complex images acquired over natural media generally present zero-mean circular Gaussian characteristics. In this case, second order statistics such as the multi-channel covariance matrix fully describe the data. For practical situations, however, the covariance matrix has to be estimated using a limited number of samples, and this sample covariance matrix follows the complex Wishart distribution. In this context, the eigendecomposition of the multi-channel covariance matrix has been shown to be highly relevant in different areas regarding the physical properties of the imaged scene. Specifically, the maximum eigenvalue of the covariance matrix has been frequently used in applications such as target or change detection, estimation of the dominant scattering mechanism in polarimetric data, moving target indication, etc. In this paper, the statistical behavior of the maximum eigenvalue derived from the eigendecomposition of the sample multi-channel covariance matrix for multi-channel SAR images is presented in simplified form for the SAR community. Validation is performed against simulated data, and examples of estimation and detection problems using the analytical expressions are also given. PMID:22736976
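    A minimal numerical illustration of the quantities discussed above: for an n-look sample of p-channel complex SAR pixel vectors, the sample covariance matrix and its maximum eigenvalue can be computed as below. This is a generic sketch with synthetic data, not the paper's analytical derivation of the eigenvalue statistics.

```python
import numpy as np

def max_eigenvalue_of_sample_covariance(samples):
    """samples: (n_looks, n_channels) complex vectors drawn from one pixel neighbourhood."""
    n_looks = samples.shape[0]
    # Sample (multi-look) covariance matrix, n_channels x n_channels, Hermitian.
    cov = samples.conj().T @ samples / n_looks
    # Hermitian matrix -> real eigenvalues (ascending); return the largest.
    return np.linalg.eigvalsh(cov)[-1]

# Toy example: 9 looks of a 3-channel (e.g. polarimetric) pixel vector.
rng = np.random.default_rng(0)
looks = (rng.standard_normal((9, 3)) + 1j * rng.standard_normal((9, 3))) / np.sqrt(2)
print(max_eigenvalue_of_sample_covariance(looks))
```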

  14. Diagnostics for PLX-alpha

    NASA Astrophysics Data System (ADS)

    Gilmore, Mark; Hsu, Scott

    2015-11-01

    The goal of the Plasma Liner eXperiment PLX-alpha at Los Alamos National Laboratory is to establish the viability of creating a spherically imploding plasma liner for MIF and HED applications, using a spherical array of supersonic plasma jets launched by innovative contoured-gap coaxial plasma guns. PLX-α experiments will focus in particular on establishing the ram pressure and uniformity scalings of partial and fully spherical plasma liners. In order to characterize these parameters experimentally, a suite of diagnostics is planned, including multi-camera fast imaging, a 16-channel visible interferometer (upgraded from 8 channels) with reconfigurable, fiber-coupled front end, and visible and VUV high-resolution and survey spectroscopy. Tomographic reconstruction and data fusion techniques will be used in conjunction with interferometry, imaging, and synthetic diagnostics from modeling to characterize liner uniformity in 3D. Diagnostic and data analysis design, implementation, and status will be presented. Supported by the Advanced Research Projects Agency - Energy - U.S. Department of Energy.

  15. Eddy current imaging for electrical characterization of silicon solar cells and TCO layers

    NASA Astrophysics Data System (ADS)

    Hwang, Byungguk; Hillmann, Susanne; Schulze, Martin; Klein, Marcus; Heuer, Henning

    2015-03-01

    Eddy Current Testing has mainly been used to detect defects in conductive materials and to measure wall thicknesses in heavy industries such as construction or aerospace. Recently, a high frequency Eddy Current imaging technology was developed. It enables information from different depth levels in conductive thin-film structures to be acquired by realizing an appropriate standard penetration depth. In this paper, we summarize state-of-the-art applications focusing on the PV industry and extend the analysis by applying spatially resolved Eddy Current Testing. Appropriate choices of frequency and complex phase angle rotation reveal diverse defects from the front to the back side of silicon solar cells and characterize the homogeneity of sheet resistance in Transparent Conductive Oxide (TCO) layers. In order to verify technical feasibility, measurement results from the Multi Parameter Eddy Current Scanner (MPECS) are compared to results from electroluminescence.

  16. Microstructural and Defect Characterization in Ceramic Composites Using an Ultrasonic Guided Wave Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Cosgriff, L. M.; Martin, R. E.; Verrilli, M. J.; Bhatt, R. T.

    2003-01-01

    In this study, an ultrasonic guided wave scan system was used to characterize various microstructural and flaw conditions in two types of ceramic matrix composites, SiC/SiC and C/SiC. Rather than attempting to isolate specific Lamb wave modes to use for characterization (as is desired for many types of guided wave inspection problems), the guided wave scan system utilizes the total (multi-mode) ultrasonic response in its inspection analysis. Several time- and frequency-domain parameters are calculated from the ultrasonic guided wave signal at each scan location to form images. Microstructural and defect conditions examined include delamination, density variation, cracking, and pre-/post-infiltration. Results are compared with thermographic imaging methods. Although the guided wave technique is commonly used so that scanning can be eliminated, applying the technique in the scanning mode allows a more precise characterization of defect conditions.

  17. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as the pixel to microlens ratio (PMR), the light-field camera to tomographic camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires an overall greater number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  18. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.

  19. Estimate the effective connectivity in multi-coupled neural mass model using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile

    2017-03-01

    Assessment of the effective connectivity among different brain regions during seizure is a crucial problem in neuroscience today. Consequently, a new model inversion framework for brain function imaging is introduced in this manuscript. This framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM), which cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. When epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
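    A bare-bones particle swarm optimizer of the kind used above to fit NMM connectivity parameters might look like the sketch below. The objective (mismatch between simulated and recorded activity) is left as a placeholder, and the swarm constants are conventional textbook values rather than the paper's settings.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO; 'objective' maps a parameter vector to a scalar cost."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([objective(p) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

# Toy usage: recover two "connectivity" parameters of a known quadratic cost bowl.
best, cost = pso_minimize(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2),
                          bounds=[(-5, 5), (-5, 5)])
print(best, cost)
```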

  20. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Da Rio, Nicola; Robberto, Massimo, E-mail: ndario@rssd.esa.int

    We present the Tool for Astrophysical Data Analysis (TA-DA), a new software tool aimed at greatly simplifying and improving the analysis of stellar photometric data in comparison with theoretical models, and allowing the derivation of stellar parameters from multi-band photometry. Its flexibility allows one to address a number of such problems: from the interpolation of stellar models, or sets of stellar physical parameters in general, to the computation of synthetic photometry in arbitrary filters or units; from the analysis of observed color-magnitude diagrams to a Bayesian derivation of stellar parameters (and extinction) based on multi-band data. TA-DA is available as a pre-compiled Interactive Data Language widget-based application; its graphical user interface makes it considerably user-friendly. In this paper, we describe the software and its functionalities.

  2. Large-area settlement pattern recognition from Landsat-8 data

    NASA Astrophysics Data System (ADS)

    Wieland, Marc; Pittore, Massimiliano

    2016-09-01

    The study presents an image processing and analysis pipeline that combines object-based image analysis with a Support Vector Machine to derive a multi-layered settlement product from Landsat-8 data over large areas. 43 image scenes are processed over large parts of Central Asia (Southern Kazakhstan, Kyrgyzstan, Tajikistan and Eastern Uzbekistan). The main tasks tackled by this work include built-up area identification, settlement type classification and urban structure type pattern recognition. Besides commonly used accuracy assessments of the resulting map products, thorough performance evaluations are carried out under varying conditions to tune algorithm parameters and assess their applicability for the given tasks. As part of this, several research questions are addressed. In particular, the influence of the improved spatial and spectral resolution of Landsat-8 on the SVM performance for identifying built-up areas and urban structure types is evaluated. The influence of an extended feature space including digital elevation model features is also tested for mountainous regions. Moreover, the spatial distribution of classification uncertainties is analyzed and compared to the heterogeneity of the building stock within the computational unit of the segments. The study concludes that the information content of Landsat-8 images is sufficient for the tested classification tasks and that even detailed urban structures can be extracted with satisfactory accuracy. Freely available ancillary settlement point location data could further improve the built-up area classification. Digital elevation features and pan-sharpening could, however, not significantly improve the classification results. The study highlights the importance of dynamically tuned classifier parameters, and underlines the use of Shannon entropy computed from the soft answers of the SVM as a valid measure of the spatial distribution of classification uncertainties.
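    The uncertainty measure mentioned at the end of the abstract, Shannon entropy computed from the SVM's soft (probabilistic) outputs per segment, can be sketched as follows. The scikit-learn classifier and the synthetic feature matrices are stand-ins for the study's actual segment features and training data.

```python
import numpy as np
from sklearn.svm import SVC

def classification_entropy(probabilities):
    """Shannon entropy (bits) of the per-class probability vector of each segment."""
    p = np.clip(probabilities, 1e-12, 1.0)
    return -np.sum(p * np.log2(p), axis=1)

# Toy stand-in for segment features and class labels (e.g. built-up / open / water).
rng = np.random.default_rng(1)
X_train, y_train = rng.standard_normal((200, 6)), rng.integers(0, 3, 200)
X_segments = rng.standard_normal((50, 6))

svm = SVC(probability=True, gamma="scale").fit(X_train, y_train)
uncertainty = classification_entropy(svm.predict_proba(X_segments))
print(uncertainty[:5])
```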

  3. Quantification of chromatin condensation level by image processing.

    PubMed

    Irianto, Jerome; Lee, David A; Knight, Martin M

    2014-03-01

    The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
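    A simplified version of the Sobel-based measure described above (not the authors' MATLAB implementation) might compute, for a nucleus image, the density of strong intensity edges as a proxy chromatin condensation parameter; the edge threshold below is an arbitrary placeholder.

```python
import numpy as np
from scipy import ndimage

def chromatin_condensation_parameter(nucleus, edge_threshold=0.1):
    """Fraction of nucleus pixels lying on strong Sobel edges (illustrative CCP proxy)."""
    img = nucleus.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalise intensities to [0, 1]
    gx = ndimage.sobel(img, axis=0)
    gy = ndimage.sobel(img, axis=1)
    magnitude = np.hypot(gx, gy)
    return float(np.mean(magnitude > edge_threshold))

# Usage: ccp = chromatin_condensation_parameter(confocal_nucleus_crop)
```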

  4. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    PubMed

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.

  5. Calibration and analysis of a multimodal micro-CT and structured light imaging system for the evaluation of excised breast tissue

    NASA Astrophysics Data System (ADS)

    McClatchy, David M., III; Rizzo, Elizabeth J.; Meganck, Jeff; Kempner, Josh; Vicory, Jared; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.

    2017-12-01

    A multimodal micro-computed tomography (CT) and multi-spectral structured light imaging (SLI) system is introduced and systematically analyzed to test its feasibility to aid in margin delineation during breast conserving surgery (BCS). Phantom analysis of the micro-CT yielded a signal-to-noise ratio of 34, a contrast of 1.64, and a minimum detectable resolution of 240 μm for a 1.2 min scan. The SLI system, spanning wavelengths 490 nm to 800 nm and spatial frequencies up to 1.37 mm⁻¹, was evaluated with aqueous tissue simulating phantoms having variations in particle size distribution, scatter density, and blood volume fraction. The reduced scattering coefficient, μs′, and phase function parameter, γ, were accurately recovered over all wavelengths independent of blood volume fractions from 0% to 4%, assuming a flat sample geometry perpendicular to the imaging plane. The resolution of the optical system was tested with a step phantom, from which the modulation transfer function was calculated, yielding a maximum resolution of 3.78 cycles per mm. The three dimensional spatial co-registration between the CT and optical imaging space was tested and shown to be accurate within 0.7 mm. A freshly resected breast specimen, with lobular carcinoma, fibrocystic disease, and adipose, was imaged with the system. The micro-CT provided visualization of the tumor mass and its spiculations, and SLI yielded superficial quantification of light scattering parameters for the malignant and benign tissue types. These results appear to be the first demonstration of SLI combined with standard medical tomography for imaging excised tumor specimens. While further investigations are needed to determine and test the spectral, spatial, and CT features required to classify tissue, this study demonstrates the ability of multimodal CT/SLI to quantify, visualize, and spatially navigate breast tumor specimens, which could potentially aid in the assessment of tumor margin status during BCS.

  6. Evaluation for Bearing Wear States Based on Online Oil Multi-Parameters Monitoring

    PubMed Central

    Hu, Hai-Feng

    2018-01-01

    As bearings are critical components of a mechanical system, it is important to characterize their wear states and evaluate their health condition. In this paper, a novel approach for analyzing the relationship between online oil multi-parameter monitoring samples and bearing wear states is proposed based on an improved gray k-means clustering model (G-KCM). First, an online monitoring system with multiple sensors for bearings is established, obtaining oil multi-parameter data and vibration signals for the bearings over their whole lifetime. Secondly, a gray correlation degree distance matrix is generated using a gray correlation model (GCM) to express the relationship between oil monitoring samples at different times, and then a KCM is applied to cluster the matrix. Analysis and experimental results show an obvious correspondence, with state changes coinciding closely in time, between the lubricants' multi-parameter data and the bearings' wear states. It is also shown that online multi-parameter oil samples have an early wear-failure prediction ability for bearings superior to that of vibration signals. The approach is expected to realize online oil monitoring and evaluation of bearing health condition and to provide a novel approach for early identification of bearing-related failure modes. PMID:29621175
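    The gray correlation degree underlying the G-KCM model can be illustrated with the standard Deng gray-relational formula below; this is the textbook form with a conventional resolution coefficient of 0.5, and it may differ in detail from the improved model used in the paper.

```python
import numpy as np

def gray_correlation_degree(reference, comparison, rho=0.5):
    """Deng's gray relational grade between two equally long, normalised
    oil-parameter sequences (0..1 scale); rho is the resolution coefficient."""
    ref = np.asarray(reference, dtype=float)
    cmp_ = np.asarray(comparison, dtype=float)
    delta = np.abs(ref - cmp_)
    dmax = delta.max()
    # Small epsilon guards the degenerate case of identical sequences.
    coeff = (delta.min() + rho * dmax) / (delta + rho * dmax + 1e-12)
    return float(coeff.mean())

# Toy usage: similarity of two normalised oil-monitoring samples.
print(gray_correlation_degree([0.2, 0.4, 0.9], [0.25, 0.5, 0.7]))
```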

  7. Evaluation for Bearing Wear States Based on Online Oil Multi-Parameters Monitoring.

    PubMed

    Wang, Si-Yuan; Yang, Ding-Xin; Hu, Hai-Feng

    2018-04-05

    As bearings are critical components of a mechanical system, it is important to characterize their wear states and evaluate their health condition. In this paper, a novel approach for analyzing the relationship between online oil multi-parameter monitoring samples and bearing wear states is proposed based on an improved gray k-means clustering model (G-KCM). First, an online monitoring system with multiple sensors for bearings is established, obtaining oil multi-parameter data and vibration signals for the bearings over their whole lifetime. Secondly, a gray correlation degree distance matrix is generated using a gray correlation model (GCM) to express the relationship between oil monitoring samples at different times, and then a KCM is applied to cluster the matrix. Analysis and experimental results show an obvious correspondence, with state changes coinciding closely in time, between the lubricants' multi-parameter data and the bearings' wear states. It is also shown that online multi-parameter oil samples have an early wear-failure prediction ability for bearings superior to that of vibration signals. The approach is expected to realize online oil monitoring and evaluation of bearing health condition and to provide a novel approach for early identification of bearing-related failure modes.

  8. Assessment of beating parameters in human induced pluripotent stem cells enables quantitative in vitro screening for cardiotoxicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirenko, Oksana, E-mail: oksana.sirenko@moldev.com; Cromwell, Evan F., E-mail: evan.cromwell@moldev.com; Crittenden, Carole

    2013-12-15

    Human induced pluripotent stem cell (iPSC)-derived cardiomyocytes show promise for screening during early drug development. Here, we tested a hypothesis that in vitro assessment of multiple cardiomyocyte physiological parameters enables predictive and mechanistically-interpretable evaluation of cardiotoxicity in a high-throughput format. Human iPSC-derived cardiomyocytes were exposed for 30 min or 24 h to 131 drugs, positive (107) and negative (24) for in vivo cardiotoxicity, in up to 6 concentrations (3 nM to 30 μM) in 384-well plates. Fast kinetic imaging was used to monitor changes in cardiomyocyte function using intracellular Ca2+ flux readouts synchronous with beating, and cell viability. A number of physiological parameters of cardiomyocyte beating, such as beat rate, peak shape (amplitude, width, raise, decay, etc.) and regularity were collected using automated data analysis. Concentration–response profiles were evaluated using logistic modeling to derive a benchmark concentration (BMC) point-of-departure value, based on one standard deviation departure from the estimated baseline in vehicle (0.3% dimethyl sulfoxide)-treated cells. BMC values were used for cardiotoxicity classification and ranking of compounds. Beat rate and several peak shape parameters were found to be good predictors, while cell viability had poor classification accuracy. In addition, we applied the Toxicological Prioritization Index (ToxPi) approach to integrate and display data across many collected parameters, to derive a "cardiosafety" ranking of tested compounds. Multi-parameter screening of beating profiles allows for cardiotoxicity risk assessment and identification of specific patterns defining mechanism-specific effects. These data and analysis methods may be used widely for compound screening and early safety evaluation in drug development. - Highlights: • Induced pluripotent stem cell-derived cardiomyocytes are promising in vitro models. • We tested if evaluation of cardiotoxicity is possible in a high-throughput format. • The assay shows benefits of automated data integration across multiple parameters. • Quantitative assessment of concentration–response is possible using iPSCs. • Multi-parametric screening allows for cardiotoxicity risk assessment.
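    As an illustration of the kind of per-well beat descriptors such a screen extracts, the sketch below derives beat rate and mean peak amplitude/width from a calcium-flux time trace using SciPy's peak finder. The prominence threshold and the synthetic trace are placeholders, not the assay's actual analysis settings.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def beating_parameters(trace, sampling_hz):
    """Return beats-per-minute, mean peak amplitude, and mean peak width (s) of a trace."""
    baseline = np.median(trace)
    peaks, _ = find_peaks(trace, prominence=0.2 * (trace.max() - baseline))
    widths_s = peak_widths(trace, peaks, rel_height=0.5)[0] / sampling_hz
    duration_min = len(trace) / sampling_hz / 60.0
    return {"beat_rate_bpm": len(peaks) / duration_min,
            "mean_amplitude": float(np.mean(trace[peaks] - baseline)) if len(peaks) else 0.0,
            "mean_width_s": float(np.mean(widths_s)) if len(peaks) else 0.0}

# Toy trace: 30 s of an ~1 Hz "beating" signal sampled at 100 Hz.
t = np.arange(0, 30, 0.01)
trace = np.exp(-((t % 1.0) / 0.15) ** 2)   # one transient per second
print(beating_parameters(trace, sampling_hz=100))
```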

  9. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  10. Preface of 16th International conference on Defects, Recognition, Imaging and Physics in Semiconductors

    NASA Astrophysics Data System (ADS)

    Yang, Deren; Xu, Ke

    2016-11-01

    The 16th International Conference on Defects-Recognition, Imaging and Physics in Semiconductors (DRIP-XVI) was held at the Worldhotel Grand Dushulake in Suzhou, China from 6th to 10th September 2015, around the 30th anniversary of the first DRIP conference. It was hosted by the Suzhou Institute of Nano-tech and Nano-bionics (SINANO), Chinese Academy of Sciences. On this occasion, about one hundred participants from nineteen countries attended the event, and a wide range of subjects was addressed during the conference: physics of point and extended defects in semiconductors (origin, electrical, optical and magnetic properties of defects); diagnostic techniques for crystal growth and processing of semiconductor materials (in-situ and process control); device imaging and mapping to evaluate performance and reliability; defect analysis in degraded optoelectronic and electronic devices; imaging techniques and instruments (proximity probe, X-ray, electron beam, non-contact electrical, optical and thermal imaging techniques, etc.); new frontiers of atomic-scale defect assessment (STM, AFM, SNOM, ballistic electron energy microscopy, TEM, etc.); and new approaches for multi-physical-parameter characterization with nano-scale spatial resolution. Within these subjects, there were 58 talks, of which 18 were invited, and 50 posters.

  11. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  12. The Montage Image Mosaic Toolkit As A Visualization Engine.

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Lerias, Angela; Good, John; Mandel, Eric; Pepper, Joshua

    2018-01-01

    The Montage toolkit has been used since 2003 to aggregate FITS images into mosaics for science analysis. It is now finding application as an engine for image visualization. One important reason is that the functionality developed for creating mosaics is also valuable in image visualization. An equally important (though perhaps less obvious) reason is that Montage is portable and is built on standard astrophysics toolkits, making it very easy to integrate into new environments. Montage models and rectifies the sky background to a common level and thus reveals faint, diffuse features; it offers an adaptive image stretching method that preserves the dynamic range of a FITS image when represented in PNG format; it provides utilities for creating cutouts of large images and downsampled versions of large images that can then be visualized on desktops or in browsers; it contains a fast reprojection algorithm intended for visualization; and it resamples and reprojects images to a common grid for subsequent multi-color visualization. This poster highlights these visualization capabilities with the following examples: 1. Creation of down-sampled multi-color images of a 16-wavelength Infrared Atlas of the Galactic Plane, sampled at 1 arcsec when created. 2. Integration into a web-based image processing environment: JS9 is an interactive image display service for web browsers, desktops and mobile devices. It exploits the flux-preserving reprojection algorithms in Montage to transform diverse images to common image parameters for display. Select Montage programs have been compiled to Javascript/WebAssembly using the Emscripten compiler, which allows our reprojection algorithms to run in browsers at close to native speed. 3. Creation of complex sky coverage maps: a multicolor all-sky map that shows the sky coverage of the Kepler and K2, KELT and TESS projects, overlaid on an all-sky 2MASS image. Montage is funded by the National Science Foundation under Grant Number ACI-1642453. JS9 is funded by the Chandra X-ray Center (NAS8-03060) and NASA's Universe of Learning (STScI-509913).

  13. Quadratic Frequency Modulation Signals Parameter Estimation Based on Two-Dimensional Product Modified Parameterized Chirp Rate-Quadratic Chirp Rate Distribution.

    PubMed

    Qu, Zhiyu; Qu, Fuxin; Hou, Changbo; Jing, Fulong

    2018-05-19

    In an inverse synthetic aperture radar (ISAR) imaging system for targets with complex motion, the azimuth echo signals of the target are often modeled as multicomponent quadratic frequency modulation (QFM) signals. The chirp rate (CR) and quadratic chirp rate (QCR) estimation of QFM signals is very important for solving the ISAR image defocus problem. For multicomponent QFM (multi-QFM) signals, the conventional CR and QCR estimation algorithms suffer from cross-terms and poor anti-noise ability. This paper proposes a novel estimation algorithm called the two-dimensional product modified parameterized chirp rate-quadratic chirp rate distribution (2D-PMPCRD) for QFM signal parameter estimation. The 2D-PMPCRD employs a multi-scale parametric symmetric self-correlation function and a modified nonuniform fast Fourier transform-fast Fourier transform to transform the signals into the chirp rate-quadratic chirp rate (CR-QCR) domains. It can greatly suppress the cross-terms while strengthening the auto-terms by multiplying different CR-QCR domains with different scale factors. Compared with the high order ambiguity function-integrated cubic phase function and the modified Lv's distribution, the simulation results verify that the 2D-PMPCRD achieves higher anti-noise performance and better cross-term suppression for multi-QFM signals with reasonable computation cost.

  14. Multi-objective optimization of process parameters of multi-step shaft formed with cross wedge rolling based on orthogonal test

    NASA Astrophysics Data System (ADS)

    Han, S. T.; Shu, X. D.; Shchukin, V.; Kozhevnikova, G.

    2018-06-01

    In order to achieve reasonable process parameters in forming a multi-step shaft by cross wedge rolling, this study investigated the rolling-forming process of a multi-step shaft using the DEFORM-3D finite element software. An interactive orthogonal experiment was used to study the effect of eight parameters, the first section shrinkage rate φ1, the first forming angle α1, the first spreading angle β1, the first spreading length L1, the second section shrinkage rate φ2, the second forming angle α2, the second spreading angle β2 and the second spreading length L2, on the quality of the shaft end and on the microstructure uniformity. By using the fuzzy mathematics comprehensive evaluation method and the extreme difference (range) analysis, the influence of the process parameters on the quality of the multi-step shaft was ranked as: β2 > φ2 > L1 > α1 > β1 > φ1 > α2 > L2. The results of the study can provide guidance for obtaining multi-step shafts with high mechanical properties and achieving near net forming without a stub bar in cross wedge rolling.

  15. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) as a similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is performed using Particle Swarm Optimization, which is easy to implement and has few parameters to adjust. The developed approach has been tested and verified successfully on a number of medical image datasets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image data in the developed approach.
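    A histogram-based sketch of the similarity idea described above: mutual information is computed separately for the intensity images and for precomputed GVF-intensity images, then combined with a weight. The equal weighting, bin count, and the assumption that the GVF-intensity maps are supplied as arrays are simplifications, not the paper's exact metric.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally shaped images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

def combined_similarity(fixed, moving, fixed_gvf, moving_gvf, alpha=0.5):
    """Weighted combination of intensity MI and GVF-intensity MI (illustrative)."""
    return (alpha * mutual_information(fixed, moving)
            + (1.0 - alpha) * mutual_information(fixed_gvf, moving_gvf))
```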

  16. Multivariate meta-analysis for non-linear and other multi-parameter associations

    PubMed Central

    Gasparrini, A; Armstrong, B; Kenward, M G

    2012-01-01

    In this paper, we formalize the application of multivariate meta-analysis and meta-regression to synthesize estimates of multi-parameter associations obtained from different studies. This modelling approach extends the standard two-stage analysis used to combine results across different sub-groups or populations. The most straightforward application is for the meta-analysis of non-linear relationships, described for example by regression coefficients of splines or other functions, but the methodology easily generalizes to any setting where complex associations are described by multiple correlated parameters. The modelling framework of multivariate meta-analysis is implemented in the package mvmeta within the statistical environment R. As an illustrative example, we propose a two-stage analysis for investigating the non-linear exposure–response relationship between temperature and non-accidental mortality using time-series data from multiple cities. Multivariate meta-analysis represents a useful analytical tool for studying complex associations through a two-stage procedure. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22807043

  17. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  18. Reparametrization-based estimation of genetic parameters in multi-trait animal model using Integrated Nested Laplace Approximation.

    PubMed

    Mathew, Boby; Holand, Anna Marie; Koistinen, Petri; Léon, Jens; Sillanpää, Mikko J

    2016-02-01

    A novel reparametrization-based INLA approach is presented as a fast alternative to MCMC for the Bayesian estimation of genetic parameters in a multivariate animal model. Multi-trait genetic parameter estimation is a relevant topic in animal and plant breeding programs because multi-trait analysis can take into account the genetic correlation between different traits, which significantly improves the accuracy of the genetic parameter estimates. Generally, multi-trait analysis is computationally demanding and requires initial estimates of genetic and residual correlations among the traits, which are difficult to obtain. In this study, we illustrate how to reparametrize the covariance matrices of a multivariate animal model using modified Cholesky decompositions. This reparametrization-based approach is used in the Integrated Nested Laplace Approximation (INLA) methodology to estimate the genetic parameters of the multivariate animal model. Immediate benefits are: (1) the difficulty of finding good starting values for the analysis, which can be a problem for example in Restricted Maximum Likelihood (REML), is avoided; (2) Bayesian estimation of (co)variance components using INLA is faster to execute than Markov Chain Monte Carlo (MCMC), especially when realized relationship matrices are dense. A slight drawback is that priors for covariance matrices are assigned to elements of the Cholesky factor rather than directly to the covariance matrix elements as in MCMC. Additionally, we illustrate the concordance of the INLA results with traditional approaches such as MCMC and REML. We also present results obtained from simulated data sets with replicates and from field data in rice.
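
    One common way to map a covariance matrix to an unconstrained parameter vector is through its Cholesky factor, which is the spirit of the reparametrization described above (the paper uses a modified Cholesky decomposition; the log-diagonal variant below is a simplified stand-in, not the authors' exact parametrization).

      import numpy as np

      def theta_from_cov(G):
          # Covariance matrix -> unconstrained vector: log of the Cholesky
          # diagonal plus the strict lower triangle of the Cholesky factor.
          L = np.linalg.cholesky(G)
          n = G.shape[0]
          return np.concatenate([np.log(np.diag(L)), L[np.tril_indices(n, -1)]])

      def cov_from_theta(theta, n):
          # Inverse map: rebuild the factor and recover G = L L^T, which is
          # positive definite for any real-valued theta.
          L = np.zeros((n, n))
          L[np.diag_indices(n)] = np.exp(theta[:n])
          L[np.tril_indices(n, -1)] = theta[n:]
          return L @ L.T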

  19. Analysis of airborne MAIS imaging spectrometric data for mineral exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Jinnian; Zheng Lanfen; Tong Qingxi

    1996-11-01

    The high spectral resolution imaging spectrometric system makes quantitative analysis and mapping of surface composition possible. The key issue is the quantitative approach for analyzing surface parameters from imaging spectrometer data. This paper describes the methods and stages of quantitative analysis: (1) Extracting surface reflectance from the imaging spectrometer imagery. Laboratory and in-flight field measurements are conducted to calibrate the imaging spectrometer data, and atmospheric correction is applied to obtain ground reflectance using the empirical line method and radiative transfer modeling. (2) Determining the quantitative relationship between absorption band parameters derived from the imaging spectrometer data and the chemical composition of minerals. (3) Spectral comparison between spectra from a spectral library and spectra derived from the imagery. A wavelet analysis-based spectrum-matching technique for quantitative analysis of imaging spectrometer data has been developed. Airborne MAIS imaging spectrometer data were used for the analysis, and the results have been applied to mineral and petroleum exploration in the Tarim Basin area, China.
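
    The empirical line correction mentioned in step (1) amounts to fitting a per-band gain and offset that map at-sensor radiance to the known reflectance of field calibration targets. A minimal sketch, assuming one band and at least two targets (e.g. a bright and a dark panel), is given below.

      import numpy as np

      def empirical_line(target_radiance, target_reflectance):
          # Fit reflectance = gain * radiance + offset by least squares from
          # calibration targets of known reflectance, then apply the same
          # gain and offset to every pixel of that band.
          A = np.column_stack([target_radiance, np.ones_like(target_radiance)])
          (gain, offset), *_ = np.linalg.lstsq(A, target_reflectance, rcond=None)
          return gain, offset

      # reflectance_band = gain * radiance_band + offset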

  20. High-resolution computational ghost imaging and ghost diffraction through turbulence via a beam-shaping method

    NASA Astrophysics Data System (ADS)

    Luo, Chun-Ling; Zhuo, Ling-Qing

    2017-01-01

    Imaging through atmospheric turbulence is a topic with a long history, and grand challenges remain in remote sensing and astronomical observation. In this letter, we propose a simple scheme to improve the resolution of imaging through turbulence based on a computational ghost imaging (CGI) and computational ghost diffraction (CGD) setup using laser beam shaping techniques. A unified theory of CGI and CGD through turbulence with a multi-Gaussian-shaped incoherent source is developed, and numerical examples are given to show clearly the effects of the system parameters on CGI and CGD. Our results show that the atmospheric effect on the CGI and CGD system is closely related to the propagation distance between the source and the object. In addition, by properly increasing the beam order of the multi-Gaussian source, we can improve the resolution of CGI and CGD through turbulence relative to the commonly used Gaussian source. Our results may therefore find applications in remote sensing and astronomical observation.
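
    The basic reconstruction step in computational ghost imaging correlates the known illumination patterns with the single-pixel "bucket" signals. The sketch below implements only that correlation under ideal, turbulence-free propagation with plain random patterns; the multi-Gaussian beam shaping and turbulence modelling discussed in the paper are not represented.

      import numpy as np

      def cgi_reconstruct(obj, n_patterns=5000, seed=0):
          # Project random patterns onto the object, record bucket signals
          # B_k = sum(pattern_k * object), and reconstruct the object from
          # the intensity correlation <B * I> - <B><I>.
          rng = np.random.default_rng(seed)
          patterns = rng.random((n_patterns,) + obj.shape)
          bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
          centred = patterns - patterns.mean(axis=0)
          return np.tensordot(bucket - bucket.mean(), centred, axes=([0], [0])) / n_patterns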

  1. Novel methods for parameter-based analysis of myocardial tissue in MR images

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Behrens, S.; Kuehnel, C.; Oeltze, S.; Konrad, O.; Peitgen, H.-O.

    2007-03-01

    The analysis of myocardial tissue with contrast-enhanced MR yields multiple parameters, which can be used to classify the examined tissue. Perfusion images are often distorted by motion, while late enhancement images are acquired with a different size and resolution. Therefore, it is common to reduce the analysis to a visual inspection, or to the examination of parameters related to the 17-segment-model proposed by the American Heart Association (AHA). As this simplification comes along with a considerable loss of information, our purpose is to provide methods for a more accurate analysis regarding topological and functional tissue features. In order to achieve this, we implemented registration methods for the motion correction of the perfusion sequence and the matching of the late enhancement information onto the perfusion image and vice versa. For the motion corrected perfusion sequence, vector images containing the voxel enhancement curves' semi-quantitative parameters are derived. The resulting vector images are combined with the late enhancement information and form the basis for the tissue examination. For the exploration of data we propose different modes: the inspection of the enhancement curves and parameter distribution in areas automatically segmented using the late enhancement information, the inspection of regions segmented in parameter space by user defined threshold intervals and the topological comparison of regions segmented with different settings. Results showed a more accurate detection of distorted regions in comparison to the AHA-model-based evaluation.
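
    The voxel-wise vector images described above hold semi-quantitative descriptors of each enhancement curve. A minimal sketch of such descriptors is shown below; the exact parameter set used by the authors is not specified here, and taking the baseline from the first three frames is an assumption.

      import numpy as np

      def perfusion_parameters(curve, dt=1.0):
          # Semi-quantitative parameters of one voxel enhancement curve.
          curve = np.asarray(curve, dtype=float)
          baseline = curve[:3].mean()
          enh = curve - baseline
          return {
              "peak_enhancement": float(enh.max()),
              "time_to_peak": float(enh.argmax() * dt),
              "max_upslope": float(np.diff(enh).max() / dt),
              "area_under_curve": float(np.trapz(np.clip(enh, 0.0, None), dx=dt)),
          }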

  2. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019

  3. Photometric Follow-up of Eclipsing Binary Candidates from KELT and Kepler

    NASA Astrophysics Data System (ADS)

    Garcia Soto, Aylin; Rodriguez, Joseph E.; Bieryla, Allyson; KELT survey

    2018-01-01

    Eclipsing binaries (EBs) are incredibly valuable, as they provide the opportunity to precisely measure fundamental stellar parameters without the need for stellar models. Therefore, we can use EBs to directly test stellar evolution models. Constraining stellar properties is important since they directly influence our understanding of any planets orbiting those stars. Using Harvard University's Clay 0.4m telescope and the Fred Lawrence Whipple Observatory's 1.2m telescope on Mount Hopkins, Arizona, we conducted follow-up multi-band photometric observations of EB candidates from the Kilodegree Extremely Little Telescope (KELT) survey and the Kepler mission. We will present our follow-up observations and AstroImageJ analysis of these 5 EB systems.

  4. Image irradiance distribution in the 3MI wide field of view polarimeter

    NASA Astrophysics Data System (ADS)

    Gabrieli, Riccardo; Bartoli, Alessandro; Maiorano, Michele; Bruno, Umberto; Olivieri, Monica; Calamai, Luciano; Manolis, Ilias; Labate, Demetrio

    2015-09-01

    The Multi-Viewing, Multi-Channel, Multi-Polarisation Imager (3MI) is an imaging radiometer for the ESA/EUMETSAT MetOp-SG programme. Based on the heritage of the POLDER/PARASOL instrument, 3MI is designed to collect global observations of the top-of-atmosphere polarised bi-directional reflectance distribution function in 12 spectral bands, by observing the same target from multiple views using a pushbroom scanning concept. The demanding challenge of the 3MI optical design lies in the polarisation and image irradiance fall-off (throughput uniformity) requirements. In a generic optical system, the image irradiance fall-off is a function of the target radiance distribution and polarisation, entrance pupil size and optical transmittance variations across the field of view (FOV), distortion and vignetting. In most applications these aspects can be considered independent; however, when high image irradiance uniformity is required, they have to be treated as linked together. This is particularly true in the case of a wide-FOV polarimeter such as 3MI. In order to properly account for these aspects, an irradiance fall-off analytical model has been developed in the frame of the 3MI Optics Pre-Development (OPD), whose aim is to mitigate the technological risks associated with the 3MI instrument development. It is shown how it is possible to control the image irradiance distribution by acting on optical design parameters (e.g. distortion and entrance pupil size variation with FOV). Moreover, the impact of polarisation performance on irradiance fall-off is discussed.

  5. MERGING GALAXY CLUSTERS: OFFSET BETWEEN THE SUNYAEV-ZEL'DOVICH EFFECT AND X-RAY PEAKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molnar, Sandor M.; Hearn, Nathan C.; Stadel, Joachim G., E-mail: sandor@phys.ntu.edu.tw

    2012-03-20

    Galaxy clusters, the most massive collapsed structures, have been routinely used to determine cosmological parameters. When using clusters for cosmology, the crucial assumption is that they are relaxed. However, subarcminute-resolution Sunyaev-Zel'dovich (SZ) effect images compared with high-resolution X-ray images of some clusters show significant offsets between the two peaks. We have carried out self-consistent N-body/hydrodynamical simulations of merging galaxy clusters using FLASH to study these offsets quantitatively. We have found that significant displacements between the SZ and X-ray peaks result for large relative velocities for all masses used in our simulations, as long as the impact parameters were about 100-250 kpc. Our results suggest that the SZ peak coincides with the peak of the pressure times the line-of-sight characteristic length, and not with the pressure maximum (as it would for clusters in equilibrium). The peak in the X-ray emission, as expected, coincides with the density maximum of the main cluster. As a consequence, the morphology of the SZ signal, and therefore the offset between the SZ and X-ray peaks, changes with viewing angle. As an application, we compare the morphologies of our simulated images to observed SZ and X-ray images and mass surface densities derived from weak-lensing observations of the merging galaxy cluster CL0152-1357, and we find that a large relative velocity of 4800 km s^-1 is necessary to explain the observations. We conclude that an analysis of the morphologies of multi-frequency observations of merging clusters can be used to put meaningful constraints on the initial parameters of the progenitors.

  6. Development of a generalized multi-pixel and multi-parameter satellite remote sensing algorithm for aerosol properties

    NASA Astrophysics Data System (ADS)

    Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.

    2013-12-01

    We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength and multi-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have become widely used (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, direct use of radiative transfer calculations has been employed for non-linear remote sensing problems in place of look-up table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel and multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method is a combination of the MAP method (maximum a posteriori method, Rodgers, 2000) and the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer calculation code, Rstar (Nakajima and Tanaka, 1986, 1988), solved numerically at each iteration of the solution search. The Rstar code has been directly used in the AERONET operational processing system (Dubovik and King, 2000). Retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine-mode, sea salt, and dust particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval method to all sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 nm, the US standard atmosphere, and several aerosol and ground surface conditions. The results showed that the AOTs of fine-mode and dust particles, the soot fraction and the ground surface albedo at 674 nm were retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for the case of a dark surface, and within 0.06, 0.03, 0.04 and 0.10, respectively, for the case of a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
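
    For a linear forward model the combination of a MAP (optimal estimation) term and a Phillips-Twomey smoothness penalty has a closed-form solution, which the sketch below illustrates. It is a simplified, linearized stand-in: in the actual algorithm the forward model is the non-linear Rstar radiative transfer code evaluated at each iteration, and the state vector spans all pixels of a sub-domain.

      import numpy as np

      def map_retrieval(y, K, Se, xa, Sa, gamma):
          # Minimize (y - Kx)' Se^-1 (y - Kx) + (x - xa)' Sa^-1 (x - xa) + gamma x' H x,
          # where H penalizes second differences of the state vector (smoothness).
          n = len(xa)
          D = np.diff(np.eye(n), n=2, axis=0)     # second-difference operator
          H = D.T @ D
          Sei, Sai = np.linalg.inv(Se), np.linalg.inv(Sa)
          A = K.T @ Sei @ K + Sai + gamma * H
          b = K.T @ Sei @ y + Sai @ xa
          return np.linalg.solve(A, b)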

  7. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at practical issues in gun bore flaw image collection, such as accurate optical design, complex algorithms and precise technical requirements, this paper presents the design framework of a 3-D image collection and processing system based on multi-baseline stereo imaging. The system mainly comprises a computer, an electrical control box, a stepping motor and a CCD camera, and it performs image collection, stereo matching, 3-D information reconstruction and post-processing. Theoretical analysis and experimental results show that the images collected by this system are precise and that the system can effectively resolve the matching ambiguity caused by uniform or repeated textures. At the same time, the system offers higher measurement speed and measurement precision.

  8. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High spatial resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features. These data sets also support improved information extraction at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, Super Resolution Reconstruction (SRR) techniques operating on multi-angular image sequences have become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, with the aim of obtaining a better land cover classification. Several SR approaches were chosen for this study: Projection onto Convex Sets (POCS), Robust SR, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC). Subjective assessment through visual interpretation shows a substantial improvement in land cover detail. Quantitative measures, including peak signal-to-noise ratio and structural similarity, are used to evaluate image quality. The SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify the SRR output and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classifications of the bicubic-interpolated and SR-derived CHRIS/PROBA images showed that the SR-derived classification improves overall accuracy by 10-12%. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
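
    Of the SR approaches listed, iterative back projection is the simplest to sketch: the high-resolution estimate is repeatedly corrected by back-projecting the residual between each observed frame and a simulated low-resolution version of the estimate. The sketch below assumes an integer scale factor and frames that are already co-registered, so it ignores the sub-pixel registration step (handled by the Vandewalle algorithm in the study).

      import numpy as np
      from scipy.ndimage import zoom

      def ibp_superresolution(low_res_frames, scale=3, n_iter=20, step=0.5):
          # Iterative back projection: start from the upsampled mean frame and
          # refine it with back-projected residuals from every observation.
          hr = zoom(np.mean(low_res_frames, axis=0), scale, order=3)
          for _ in range(n_iter):
              for lr in low_res_frames:
                  residual = lr - zoom(hr, 1.0 / scale, order=3)
                  hr = hr + step * zoom(residual, scale, order=3)
          return hr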

  9. Study of gas production from shale reservoirs with multi-stage hydraulic fracturing horizontal well considering multiple transport mechanisms.

    PubMed

    Guo, Chaohua; Wei, Mingzhen; Liu, Hong

    2018-01-01

    Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process that includes desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well testing methods, which involve many unrealistic simplifications and assumptions; few systematically consider all relevant transport mechanisms, and very few perform sensitivity studies of uncertain parameters using realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters were reviewed, and sensitivity analyses were conducted to assess the effect of these factors on the production performance of SGRs. The comparison shows that hydraulic fracture parameters are more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs.
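
    The desorption term mentioned above follows the Langmuir isotherm, which gives the adsorbed gas volume as a function of pressure; the gas released as reservoir pressure declines is the difference between the isotherm evaluated at the two pressures. The constants in the sketch are illustrative placeholders, not the Barnett Shale values used in the study.

      def langmuir_adsorbed_volume(pressure, v_langmuir=0.003, p_langmuir=4.0):
          # Langmuir isotherm: V(p) = V_L * p / (p_L + p), with V_L the maximum
          # adsorbed volume per unit rock mass and p_L the Langmuir pressure
          # (both values here are illustrative, in consistent units).
          return v_langmuir * pressure / (p_langmuir + pressure)

      # Gas desorbed when reservoir pressure drops from p1 to p2:
      # released = langmuir_adsorbed_volume(p1) - langmuir_adsorbed_volume(p2)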

  10. Study of gas production from shale reservoirs with multi-stage hydraulic fracturing horizontal well considering multiple transport mechanisms

    PubMed Central

    Wei, Mingzhen; Liu, Hong

    2018-01-01

    Development of unconventional shale gas reservoirs (SGRs) has been boosted by advancements in two key technologies: horizontal drilling and multi-stage hydraulic fracturing. A large number of multi-stage fractured horizontal wells (MsFHW) have been drilled to enhance reservoir production performance. Gas flow in SGRs is a multi-mechanism process that includes desorption, diffusion, and non-Darcy flow. The productivity of SGRs with MsFHW is influenced by both reservoir conditions and hydraulic fracture properties. However, little simulation work has been conducted for multi-stage hydraulically fractured SGRs. Most existing studies use well testing methods, which involve many unrealistic simplifications and assumptions; few systematically consider all relevant transport mechanisms, and very few perform sensitivity studies of uncertain parameters using realistic parameter ranges. Hence, a detailed and systematic reservoir simulation study with MsFHW is still necessary. In this paper, a dual porosity model was constructed to estimate the effect of parameters on shale gas production with MsFHW. The simulation model was verified with available field data from the Barnett Shale. The following mechanisms were considered in this model: viscous flow, slip flow, Knudsen diffusion, and gas desorption. The Langmuir isotherm was used to simulate the gas desorption process. Sensitivity analysis of SGR production performance with MsFHW was conducted. Parameters influencing shale gas production were classified into two categories: reservoir parameters, including matrix permeability and matrix porosity; and hydraulic fracture parameters, including hydraulic fracture spacing and fracture half-length. Typical ranges of the matrix parameters were reviewed, and sensitivity analyses were conducted to assess the effect of these factors on the production performance of SGRs. The comparison shows that hydraulic fracture parameters are more sensitive than reservoir parameters: reservoir parameters mainly affect the later production period, whereas hydraulic fracture parameters have a significant effect on gas production from the early period. The results of this study can be used to improve the efficiency of the history matching process and can contribute to the design and optimization of hydraulic fracture treatments in unconventional SGRs. PMID:29320489

  11. FUNSTAT and statistical image representations

    NASA Technical Reports Server (NTRS)

    Parzen, E.

    1983-01-01

    General ideas of functional statistical inference for the analysis of one-sample and two-sample, univariate and bivariate data are outlined. The ONESAM program is applied to analyze the univariate probability distributions of multi-spectral image data.

  12. Comparison of three‐dimensional analysis and stereological techniques for quantifying lithium‐ion battery electrode microstructures

    PubMed Central

    TAIWO, OLUWADAMILOLA O.; FINEGAN, DONAL P.; EASTWOOD, DAVID S.; FIFE, JULIE L.; BROWN, LEON D.; DARR, JAWWAD A.; LEE, PETER D.; BRETT, DANIEL J.L.

    2016-01-01

    Summary Lithium‐ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium‐ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3‐D imaging techniques, quantitative assessment of 3‐D microstructures from 2‐D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two‐dimensional (2‐D) data sets. In this study, stereological prediction and three‐dimensional (3‐D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium‐ion battery electrodes were imaged using synchrotron‐based X‐ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2‐D image sections generated from tomographic imaging, whereas direct 3‐D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2‐D image sections is bound to be associated with ambiguity and that volume‐based 3‐D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially‐dependent parameters, such as tortuosity and pore‐phase connectivity. PMID:26999804

  13. Comparison of three-dimensional analysis and stereological techniques for quantifying lithium-ion battery electrode microstructures.

    PubMed

    Taiwo, Oluwadamilola O; Finegan, Donal P; Eastwood, David S; Fife, Julie L; Brown, Leon D; Darr, Jawwad A; Lee, Peter D; Brett, Daniel J L; Shearing, Paul R

    2016-09-01

    Lithium-ion battery performance is intrinsically linked to electrode microstructure. Quantitative measurement of key structural parameters of lithium-ion battery electrode microstructures will enable optimization as well as motivate systematic numerical studies for the improvement of battery performance. With the rapid development of 3-D imaging techniques, quantitative assessment of 3-D microstructures from 2-D image sections by stereological methods appears outmoded; however, in spite of the proliferation of tomographic imaging techniques, it remains significantly easier to obtain two-dimensional (2-D) data sets. In this study, stereological prediction and three-dimensional (3-D) analysis techniques for quantitative assessment of key geometric parameters for characterizing battery electrode microstructures are examined and compared. Lithium-ion battery electrodes were imaged using synchrotron-based X-ray tomographic microscopy. For each electrode sample investigated, stereological analysis was performed on reconstructed 2-D image sections generated from tomographic imaging, whereas direct 3-D analysis was performed on reconstructed image volumes. The analysis showed that geometric parameter estimation using 2-D image sections is bound to be associated with ambiguity and that volume-based 3-D characterization of nonconvex, irregular and interconnected particles can be used to more accurately quantify spatially-dependent parameters, such as tortuosity and pore-phase connectivity. © 2016 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
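
    The simplest comparison of the two routes is for porosity: the classical stereological relation V_V = A_A says that the volume fraction of a phase equals its expected area fraction on random 2-D sections, while the 3-D route measures the volume fraction directly from the reconstructed volume. A minimal sketch for a binary (pore = 1) image volume is given below; parameters such as tortuosity and pore-phase connectivity require genuinely 3-D analysis and are not covered by this relation.

      import numpy as np

      def compare_porosity(volume, slice_axis=0):
          # Direct 3-D porosity versus the mean and spread of per-slice
          # 2-D area fractions (the stereological estimate).
          porosity_3d = float(volume.mean())
          other_axes = tuple(a for a in range(volume.ndim) if a != slice_axis)
          area_fractions = volume.mean(axis=other_axes)
          return porosity_3d, float(area_fractions.mean()), float(area_fractions.std())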

  14. Tunnel Detection Using Seismic Methods

    NASA Astrophysics Data System (ADS)

    Miller, R.; Park, C. B.; Xia, J.; Ivanov, J.; Steeples, D. W.; Ryden, N.; Ballard, R. F.; Llopis, J. L.; Anderson, T. S.; Moran, M. L.; Ketcham, S. A.

    2006-05-01

    Surface seismic methods have shown great promise for use in detecting clandestine tunnels in areas where unauthorized movement beneath secure boundaries has been or is a matter of concern for authorities. Unauthorized infiltration beneath national borders and into or out of secure facilities is possible at many sites by tunneling. Developments in acquisition, processing, and analysis techniques using multi-channel seismic imaging have opened the door to a vast number of near-surface applications including anomaly detection and delineation, specifically tunnels. Body waves have great potential based on modeling and very preliminary empirical studies trying to capitalize on diffracted energy. A primary limitation of all seismic energy is the natural attenuation of high-frequency energy by earth materials and the difficulty in transmitting a high-amplitude source pulse with a broad spectrum above 500 Hz into the earth. Surface waves have shown great potential since the development of multi-channel analysis methods (e.g., MASW). Both shear-wave velocity and backscatter energy from surface waves have been shown through modeling and empirical studies to have great promise in detecting the presence of anomalies, such as tunnels. Success in developing and evaluating various seismic approaches for detecting tunnels relies on investigations at known tunnel locations, in a variety of geologic settings, employing a wide range of seismic methods, and targeting a range of uniquely different tunnel geometries, characteristics, and host lithologies. Body-wave research at the Moffat tunnels in Winter Park, Colorado, provided well-defined diffraction-looking events that correlated with the subsurface location of the tunnel complex. Natural voids related to karst have been studied in Kansas, Oklahoma, Alabama, and Florida using shear-wave velocity imaging techniques based on the MASW approach. Manmade tunnels, culverts, and crawl spaces have been the target of multi-modal analysis in Kansas and California. Clandestine tunnels used for illegal entry into the U.S. from Mexico were studied at two different sites along the southern border of California. All these studies represent the empirical basis for suggesting surface seismic has a significant role to play in tunnel detection and that methods are under development and very nearly at hand that will provide an effective tool in appraising and maintaining perimeter security. As broadband sources, gravity-coupled towed spreads, and automated analysis software continue to make advancements, so does the applicability of routine deployment of seismic imaging systems that can be operated by technicians with interpretation aids for nearly real-time target selection. Key to making these systems commercial is the development of enhanced imaging techniques in geologically noisy areas and highly variable surface terrain.

  15. Sharpening Ejecta Patterns: Investigating Spectral Fidelity After Controlled Intensity-Hue-Saturation Image Fusion of LROC Images of Fresh Craters

    NASA Astrophysics Data System (ADS)

    Awumah, A.; Mahanti, P.; Robinson, M. S.

    2017-12-01

    Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed that the Intensity-Hue-Saturation (IHS) method provided the best spatial performance but deteriorated the spectral content. In general, there is a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan is fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2) to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied: the amount of spatial detail used from the Pan is determined by a control parameter whose value may range from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red = 415 nm, green = 321/415 nm, blue = 321/360 nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (from 1 to 10, beyond which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
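
    The controlled injection of Pan detail can be sketched as below: the detail image (Pan minus an intensity computed from the MS bands) is scaled before being added back to each band. The (t - 1)/t weighting, which goes from 0 at t = 1 towards 1 as t grows, is an assumed form of the trade-off used here for illustration and is not necessarily the exact formulation of Choi (2006).

      import numpy as np

      def controlled_ihs_fusion(ms, pan, t=5.0):
          # ms: (bands, rows, cols) multispectral image upsampled to the Pan grid.
          # pan: (rows, cols) panchromatic image. Returns the sharpened bands.
          intensity = ms.mean(axis=0)
          detail = (t - 1.0) / t * (pan - intensity)
          return ms + detail          # broadcast the scaled detail into every band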

  16. Simultaneous measurements of kinematics and fMRI: compatibility assessment and case report on recovery evaluation of one stroke patient.

    PubMed

    Casellato, Claudia; Ferrante, Simona; Gandolla, Marta; Volonterio, Nicola; Ferrigno, Giancarlo; Baselli, Giuseppe; Frattini, Tiziano; Martegani, Alberto; Molteni, Franco; Pedrocchi, Alessandra

    2010-09-23

    Correlating the features of the actual executed movement with the associated cortical activations can enhance the reliability of functional Magnetic Resonance Imaging (fMRI) data interpretation. This is crucial for the longitudinal evaluation of motor recovery in neurological patients and for investigating detailed mutual interactions between activation maps and movement parameters. Therefore, we have explored a new set-up combining fMRI with an optoelectronic motion capture system, which provides a multi-parameter quantification of the performed motor task. The cameras of the motion system were mounted inside the MR room and passive markers were placed on the subject's skin, without any risk or encumbrance. The versatile set-up allows 3-dimensional multi-segment acquisitions, including the recording of possible mirror movements, and it guarantees high inter-session repeatability. We demonstrated the reliability of the integrated set-up through compatibility tests. Then, an fMRI block-design protocol combined with kinematic recordings was tested on a healthy volunteer performing finger tapping and ankle dorsal-plantar flexion. A preliminary assessment of clinical applicability and perspectives was carried out by pre- and post-rehabilitation acquisitions on a hemiparetic patient performing ankle dorsal-plantar flexion. For all sessions, the proposed method integrating kinematic data into the model design was compared with the standard analysis. Phantom acquisitions demonstrated that image quality was not compromised. The healthy subject sessions showed the feasibility of the protocols and the reliability of the model with the kinematic regressor. The patient results showed that brain activation maps were more consistent when, besides the stimuli, the regression model included the kinematic regressor quantifying the actual executed movement (movement timing and amplitude), demonstrating a significant model improvement. Moreover, concerning motor recovery evaluation, after one month of rehabilitation a greater cortical area was activated during exercise, in contrast to the usual focalization associated with functional recovery. Indeed, the availability of kinematic data allows this wider area to be correlated with a higher frequency and a larger amplitude of movement. The kinematic acquisitions proved to be reliable and versatile for enriching the information in the fMRI images and therefore the evaluation of motor recovery in neurological patients, where large differences between required and performed motion can be expected.
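
    Conceptually, integrating the kinematics amounts to adding one more column to the general linear model design matrix, so that the measured movement amplitude (or timing) per volume explains variance alongside the block stimulus regressor. The sketch below shows that idea only; in practice both regressors would be convolved with a haemodynamic response function, and the variable names are illustrative.

      import numpy as np

      def glm_betas(bold, stimulus, kinematic):
          # Least-squares fit of a design with a constant term, the block
          # stimulus regressor and an additional kinematic regressor.
          # bold may be (T,) for one voxel or (T, n_voxels) for many.
          X = np.column_stack([np.ones(len(stimulus)), stimulus, kinematic])
          betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
          return betas                # one row of coefficients per regressor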

  17. Simultaneous measurements of kinematics and fMRI: compatibility assessment and case report on recovery evaluation of one stroke patient

    PubMed Central

    2010-01-01

    Background Correlating the features of the actual executed movement with the associated cortical activations can enhance the reliability of functional Magnetic Resonance Imaging (fMRI) data interpretation. This is crucial for the longitudinal evaluation of motor recovery in neurological patients and for investigating detailed mutual interactions between activation maps and movement parameters. Therefore, we have explored a new set-up combining fMRI with an optoelectronic motion capture system, which provides a multi-parameter quantification of the performed motor task. Methods The cameras of the motion system were mounted inside the MR room and passive markers were placed on the subject's skin, without any risk or encumbrance. The versatile set-up allows 3-dimensional multi-segment acquisitions, including the recording of possible mirror movements, and it guarantees high inter-session repeatability. We demonstrated the reliability of the integrated set-up through compatibility tests. Then, an fMRI block-design protocol combined with kinematic recordings was tested on a healthy volunteer performing finger tapping and ankle dorsal-plantar flexion. A preliminary assessment of clinical applicability and perspectives was carried out by pre- and post-rehabilitation acquisitions on a hemiparetic patient performing ankle dorsal-plantar flexion. For all sessions, the proposed method integrating kinematic data into the model design was compared with the standard analysis. Results Phantom acquisitions demonstrated that image quality was not compromised. The healthy subject sessions showed the feasibility of the protocols and the reliability of the model with the kinematic regressor. The patient results showed that brain activation maps were more consistent when, besides the stimuli, the regression model included the kinematic regressor quantifying the actual executed movement (movement timing and amplitude), demonstrating a significant model improvement. Moreover, concerning motor recovery evaluation, after one month of rehabilitation a greater cortical area was activated during exercise, in contrast to the usual focalization associated with functional recovery. Indeed, the availability of kinematic data allows this wider area to be correlated with a higher frequency and a larger amplitude of movement. Conclusions The kinematic acquisitions proved to be reliable and versatile for enriching the information in the fMRI images and therefore the evaluation of motor recovery in neurological patients, where large differences between required and performed motion can be expected. PMID:20863391

  18. Characterising volcanic cycles at Soufriere Hills Volcano, Montserrat: Time series analysis of multi-parameter satellite data

    NASA Astrophysics Data System (ADS)

    Flower, Verity J. B.; Carn, Simon A.

    2015-10-01

    The identification of cyclic volcanic activity can elucidate underlying eruption dynamics and aid volcanic hazard mitigation. Whilst satellite datasets are often analysed individually, here we exploit the multi-platform NASA A-Train satellite constellation to cross-correlate cyclical signals identified using complementary measurement techniques at Soufriere Hills Volcano (SHV), Montserrat. In this paper we present a Multi-taper (MTM) Fast Fourier Transform (FFT) analysis of coincident SO2 and thermal infrared (TIR) satellite measurements at SHV facilitating the identification of cyclical volcanic behaviour. These measurements were collected by the Ozone Monitoring Instrument (OMI) and Moderate Resolution Imaging Spectroradiometer (MODIS) (respectively) in the A-Train. We identify a correlating cycle in both the OMI and MODIS data (54-58 days), with this multi-week feature attributable to episodes of dome growth. The 50 day cycles were also identified in ground-based SO2 data at SHV, confirming the validity of our analysis and further corroborating the presence of this cycle at the volcano. In addition a 12 day cycle was identified in the OMI data, previously attributed to variable lava effusion rates on shorter timescales. OMI data also display a one week (7-8 days) cycle attributable to cyclical variations in viewing angle resulting from the orbital characteristics of the Aura satellite. Longer period cycles possibly relating to magma intrusion were identified in the OMI record (102-, 121-, and 159 days); in addition to a 238-day cycle identified in the MODIS data corresponding to periodic destabilisation of the lava dome. Through the analysis of reconstructions generated from cycles identified in the OMI and MODIS data, periods of unrest were identified, including the major dome collapse of 20th May 2006 and significant explosive event of 3rd January 2009. Our analysis confirms the potential for identification of cyclical volcanic activity through combined analysis of satellite data, which would be of particular value at poorly monitored volcanic systems.
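
    Dominant cycles of the kind reported above can be flagged by inspecting the power spectrum of the detrended satellite time series. The paper uses a multi-taper FFT, which controls spectral leakage; the plain periodogram below is a simplified, evenly sampled stand-in for illustration only.

      import numpy as np

      def dominant_periods(series, dt_days=1.0, n_peaks=3):
          # Return the n_peaks strongest (period, power) pairs of a mean-removed,
          # evenly sampled series, skipping the zero-frequency term.
          x = np.asarray(series, dtype=float)
          x = x - x.mean()
          power = np.abs(np.fft.rfft(x)) ** 2
          freqs = np.fft.rfftfreq(len(x), d=dt_days)
          order = np.argsort(power[1:])[::-1][:n_peaks] + 1
          return [(1.0 / freqs[i], float(power[i])) for i in order]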

  19. Satellite image simulations for model-supervised, dynamic retrieval of crop type and land use intensity

    NASA Astrophysics Data System (ADS)

    Bach, H.; Klug, P.; Ruf, T.; Migdall, S.; Schlenz, F.; Hank, T.; Mauser, W.

    2015-04-01

    To support food security, information products about the actual cropping area per crop type, the current status of agricultural production and estimated yields, as well as the sustainability of the agricultural management are necessary. Based on this information, well-targeted land management decisions can be made. Remote sensing is in a unique position to contribute to this task, as it is globally available and provides a plethora of information about current crop status. M4Land is a comprehensive system in which a crop growth model (PROMET) and a reflectance model (SLC) are coupled in order to provide these information products by analyzing multi-temporal satellite images. SLC uses modelled surface state parameters from PROMET, such as leaf area index or phenology of different crops, to simulate spatially distributed surface reflectance spectra. This is the basis for generating artificial satellite images considering sensor-specific configurations (spectral bands, solar and observation geometries). Ensembles of model runs are used to represent different crop types, fertilization status, soil colour and soil moisture. By multi-temporal comparison of simulated and real satellite images, the land cover/crop type can be classified in a dynamic, model-supervised way without in-situ training data. The method is demonstrated in an agricultural test site in Bavaria. Its transferability is studied by analysing PROMET model results for the rest of Germany. In particular, the simulated phenological development can be verified at this scale in order to understand whether PROMET is able to adequately simulate spatial as well as temporal (intra- and inter-season) crop growth conditions, a prerequisite for the model-supervised approach. This new technology allows monitoring of management decisions at the field level using high-resolution optical data (presently RapidEye and Landsat). The M4Land analysis system is designed to integrate multi-mission data and is well suited for the use of Sentinel-2's continuous and manifold data stream.

  20. Re-use of pilot data and interim analysis of pivotal data in MRMC studies: a simulation study

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Samuelson, Frank; Sahiner, Berkman; Petrick, Nicholas

    2017-03-01

    Novel medical imaging devices are often evaluated with multi-reader multi-case (MRMC) studies in which radiologists read images of patient cases for a specified clinical task (e.g., cancer detection). A pilot study is often used to measure the effect size and variance parameters that are necessary for sizing a pivotal study (including sizing readers, non-diseased and diseased cases). Due to the practical difficulty of collecting patient cases or recruiting clinical readers, some investigators attempt to include the pilot data as part of their pivotal study. In other situations, some investigators attempt to perform an interim analysis of their pivotal study data based upon which the sample sizes may be re-estimated. Re-use of the pilot data or interim analyses of the pivotal data may inflate the type I error of the pivotal study. In this work, we use the Roe and Metz model to simulate MRMC data under the null hypothesis (i.e., two devices have equal diagnostic performance) and investigate the type I error rate for several practical designs involving re-use of pilot data or interim analysis of pivotal data. Our preliminary simulation results indicate that, under the simulation conditions we investigated, the inflation of type I error is none or only marginal for some design strategies (e.g., re-use of patient data without re-using readers, and size re-estimation without using the effect-size estimated in the interim analysis). Upon further verifications, these are potentially useful design methods in that they may help make a study less burdensome and have a better chance to succeed without substantial loss of the statistical rigor.

  1. Deriving urban dynamic evolution rules from self-adaptive cellular automata with multi-temporal remote sensing images

    NASA Astrophysics Data System (ADS)

    He, Yingqing; Ai, Bin; Yao, Yao; Zhong, Fajun

    2015-06-01

    Cellular automata (CA) have proven to be very effective for simulating and predicting the spatio-temporal evolution of complex geographical phenomena. Traditional methods generally pose problems in determining the structure and parameters of CA for a large, complex region or a long-term simulation. This study presents a self-adaptive CA model integrated with an artificial immune system to discover dynamic transition rules automatically. The model's parameters are allowed to be self-modified with the application of multi-temporal remote sensing images: that is, the CA can adapt itself to a changed and complex environment. Therefore, urban dynamic evolution rules over time can be efficiently retrieved by using this integrated model. The proposed AIS-based CA model was then used to simulate the rural-urban land conversion of Guangzhou city, located in the core of China's Pearl River Delta. The initial urban land was classified directly from a TM satellite image of 1990. Urban land in the years 1995, 2000, 2005, 2009 and 2012 was correspondingly used as the observed data to calibrate the model's parameters. Using the quantitative figure-of-merit (FoM) index and pattern similarity, the AIS-based model was further compared with a logistic CA model. The results indicate that the AIS-based CA model performs better and with higher precision in simulating urban evolution, and the simulated spatial pattern is closer to the actual development situation.

  2. High-resolution time-frequency representation of EEG data using multi-scale wavelets

    NASA Astrophysics Data System (ADS)

    Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina

    2017-09-01

    An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with application to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends as well as track abrupt changes of the time-varying parameters simultaneously. A forward orthogonal least squares (FOLS) algorithm, aided by mutual information criteria, is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide high time-dependent spectral resolution.
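
    The core of the TVAR scheme is that each time-varying AR coefficient is written as a linear combination of basis functions of time, which turns the non-stationary estimation problem into an ordinary regression on expanded regressors. The sketch below uses a generic user-supplied basis matrix and plain least squares; the multi-scale wavelet construction and the FOLS sparse term selection of the paper are not reproduced.

      import numpy as np

      def tvar_fit(x, order, basis):
          # x: signal of length T; basis: (T, J) matrix of basis functions B_j(t).
          # Model: x[t] = sum_i a_i(t) x[t-i], with a_i(t) = sum_j c_ij B_j(t).
          rows, targets = [], []
          for t in range(order, len(x)):
              lagged = x[t - order:t][::-1]            # x[t-1], ..., x[t-order]
              rows.append(np.kron(lagged, basis[t]))   # regressors for all (i, j)
              targets.append(x[t])
          coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
          return coeffs.reshape(order, basis.shape[1])  # expansion coefficients c_ij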

  3. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect the presence of those ROIs. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to the non-uniform heating that affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology combines local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because the thermal behavior is well modeled by smooth Gaussian contours. Gaussian scale analysis is then used to examine image details at multiple resolutions, making the method robust to low contrast, non-uniform heating and the choice of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting multi-resolution ROI extraction methodology performs as well as or better than other dedicated algorithms proposed in the state of the art.
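
    The detection step can be sketched with standard filters: a broad Gaussian high-pass suppresses the slowly varying non-uniform heating, and a scale-normalized blob filter bank responds to smooth, roughly Gaussian hot spots at several scales. The Laplacian-of-Gaussian bank below is a closely related stand-in for the local Gaussian correlation used by the authors, and the scales and threshold are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter, gaussian_laplace

      def roi_candidates(thermogram, sigmas=(2, 4, 8), rel_threshold=0.5):
          # Remove the slow heating trend, then keep the strongest multi-scale
          # blob responses as candidate regions of interest.
          img = thermogram - gaussian_filter(thermogram, sigma=16)
          responses = [-gaussian_laplace(img, sigma=s) * s**2 for s in sigmas]
          response = np.max(responses, axis=0)
          return response > rel_threshold * response.max()   # boolean ROI mask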

  4. Quality parameters analysis of optical imaging systems with enhanced focal depth using the Wigner distribution function

    PubMed

    Zalvidea; Colautti; Sicre

    2000-05-01

    An analysis of the Strehl ratio and the optical transfer function as imaging quality parameters of optical elements with enhanced focal length is carried out by employing the Wigner distribution function. To this end, we use four different pupil functions: a full circular aperture, a hyper-Gaussian aperture, a quartic phase plate, and a logarithmic phase mask. A comparison is performed between the quality parameters and test images formed by these pupil functions at different defocus distances.
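
    For a sampled pupil function, the Strehl ratio reduces to the on-axis point-spread-function intensity of the aberrated pupil divided by that of the unaberrated one. The sketch below evaluates this for a circular aperture with a quartic phase plate, one of the pupils considered above; the grid size and phase coefficient are illustrative.

      import numpy as np

      def strehl_ratio(pupil_amplitude, phase):
          # |integral of P * exp(i*phi)|^2 / |integral of P|^2
          aberrated = np.abs(np.sum(pupil_amplitude * np.exp(1j * phase))) ** 2
          reference = np.abs(np.sum(pupil_amplitude)) ** 2
          return aberrated / reference

      # Circular aperture with a quartic phase plate phi = a * rho**4:
      y, x = np.mgrid[-128:128, -128:128] / 128.0
      rho2 = x**2 + y**2
      pupil = (rho2 <= 1.0).astype(float)
      print(strehl_ratio(pupil, 3.0 * rho2**2))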

  5. Scanning ion conductance microscopy: a convergent high-resolution technology for multi-parametric analysis of living cardiovascular cells

    PubMed Central

    Miragoli, Michele; Moshkov, Alexey; Novak, Pavel; Shevchuk, Andrew; Nikolaev, Viacheslav O.; El-Hamamsy, Ismail; Potter, Claire M. F.; Wright, Peter; Kadir, S.H. Sheikh Abdul; Lyon, Alexander R.; Mitchell, Jane A.; Chester, Adrian H.; Klenerman, David; Lab, Max J.; Korchev, Yuri E.; Harding, Sian E.; Gorelik, Julia

    2011-01-01

    Cardiovascular diseases are complex pathologies that include alterations of various cell functions at the levels of intact tissue, single cells and subcellular signalling compartments. Conventional techniques to study these processes are extremely divergent and rely on a combination of individual methods, which usually provide spatially and temporally limited information on single parameters of interest. This review describes scanning ion conductance microscopy (SICM) as a novel versatile technique capable of simultaneously reporting various structural and functional parameters at nanometre resolution in living cardiovascular cells at the level of the whole tissue, single cells and at the subcellular level, to investigate the mechanisms of cardiovascular disease. SICM is a multimodal imaging technology that allows concurrent and dynamic analysis of membrane morphology and various functional parameters (cell volume, membrane potentials, cellular contraction, single ion-channel currents and some parameters of intracellular signalling) in intact living cardiovascular cells and tissues with nanometre resolution at different levels of organization (tissue, cellular and subcellular levels). Using this technique, we showed that at the tissue level, cell orientation in the inner and outer aortic arch distinguishes atheroprone and atheroprotected regions. At the cellular level, heart failure leads to a pronounced loss of T-tubules in cardiac myocytes accompanied by a reduction in Z-groove ratio. We also demonstrated the capability of SICM to measure the entire cell volume as an index of cellular hypertrophy. This method can be further combined with fluorescence to simultaneously measure cardiomyocyte contraction and intracellular calcium transients or to map subcellular localization of membrane receptors coupled to cyclic adenosine monophosphate production. The SICM pipette can be used for patch-clamp recordings of membrane potential and single channel currents. In conclusion, SICM provides a highly informative multimodal imaging platform for functional analysis of the mechanisms of cardiovascular diseases, which should facilitate identification of novel therapeutic strategies. PMID:21325316

  6. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  7. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.

  8. A method of extracting speed-dependent vector correlations from 2 + 1 REMPI ion images.

    PubMed

    Wei, Wei; Wallace, Colin J; Grubb, Michael P; North, Simon W

    2017-07-07

    We present analytical expressions for extracting Dixon's bipolar moments in the semi-classical limit from experimental anisotropy parameters of sliced or reconstructed non-sliced images. The current method focuses on images generated by 2 + 1 REMPI (Resonance Enhanced Multi-photon Ionization) and is a necessary extension of our previously published 1 + 1 REMPI equations. Two approaches for applying the new equations, direct inversion and forward convolution, are presented. As a demonstration of the new method, bipolar moments were extracted from images of carbonyl sulfide (OCS) photodissociation at 230 nm and NO2 photodissociation at 355 nm, and the results are consistent with previous publications.

  9. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
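
    The MEMD sifting procedure itself is too involved for a short example, but the pixel-level fusion step it enables can be sketched once aligned per-scale decompositions are available; the `fuse_scales` helper, the local-energy fusion rule and the synthetic arrays below are illustrative assumptions rather than the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_scales(imfs_a, imfs_b, win=7):
    """Fuse two images from their aligned multi-scale decompositions.

    imfs_a, imfs_b: arrays of shape (n_scales, H, W) holding the
    same-indexed scales (e.g. MEMD IMFs) of the two input images.
    At each scale the pixel with the larger local energy wins.
    """
    fused_scales = []
    for a, b in zip(imfs_a, imfs_b):
        ea = uniform_filter(a * a, size=win)   # local energy of image A
        eb = uniform_filter(b * b, size=win)   # local energy of image B
        fused_scales.append(np.where(ea >= eb, a, b))
    return np.sum(fused_scales, axis=0)        # recombine the fused scales

# Hypothetical decompositions of two 128x128 multi-exposure images.
imfs_a = np.random.randn(4, 128, 128)
imfs_b = np.random.randn(4, 128, 128)
fused = fuse_scales(imfs_a, imfs_b)
```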

  10. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  11. Correlation between the Temperature Dependence of Intrinsic MR Parameters and Thermal Dose Measured by a Rapid Chemical Shift Imaging Technique

    PubMed Central

    Taylor, Brian A.; Elliott, Andrew M.; Hwang, Ken-Pin; Hazle, John D.; Stafford, R. Jason

    2011-01-01

    In order to investigate simultaneous MR temperature imaging and direct validation of tissue damage during thermal therapy, temperature-dependent signal changes in proton resonance frequency (PRF) shifts, R2* values, and T1-weighted amplitudes are measured with a single technique in ex vivo tissue heated with a 980-nm laser at 1.5T and 3.0T. Using a multi-gradient echo acquisition and signal modeling with the Steiglitz-McBride algorithm, the temperature sensitivity coefficient (TSC) values of these parameters are measured in each tissue at high spatiotemporal resolution (1.6 × 1.6 × 4 mm³, ≤5 s) over the range of 25-61 °C. Non-linear changes in MR parameters are examined and correlated with an Arrhenius rate dose model of thermal damage. Using logistic regression, the probability of changes in these parameters is calculated as a function of thermal dose to determine if changes correspond to thermal damage. Temperature calibrations demonstrate TSC values that are consistent with previous studies. The temperature sensitivities of R2* and, in some cases, T1-weighted amplitudes are statistically different before and after thermal damage occurred. Significant changes in the slopes of R2* as a function of temperature are observed. Logistic regression analysis shows that these changes could be accurately predicted using the Arrhenius rate dose model (Ω=1.01±0.03), thereby showing that the changes in R2* could be direct markers of protein denaturation. Overall, by using a chemical shift imaging technique with simultaneous temperature estimation, R2* mapping and T1-W imaging, it is shown that changes in the sensitivity of R2* and, to a lesser degree, T1-W amplitudes are measured in ex vivo tissue when thermal damage is expected to occur according to Arrhenius rate dose models. These changes could possibly be used for direct validation of thermal damage in contrast to model-based predictions. PMID:21721063
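
    The Arrhenius rate dose model referred to above accumulates damage as Ω(t) = ∫ A·exp(−Ea/(R·T(τ))) dτ, with Ω ≥ 1 commonly taken as the damage threshold. A minimal numerical sketch follows; the frequency factor and activation energy are commonly quoted literature values for protein denaturation, not the coefficients fitted in this study.

```python
import numpy as np

def arrhenius_damage(temps_c, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Accumulate the Arrhenius thermal damage integral.

    temps_c : temperature-time course in degrees Celsius
    dt      : sampling interval in seconds
    A, Ea   : frequency factor (1/s) and activation energy (J/mol); the
              defaults are literature values often used for protein
              denaturation, not the values fitted in the paper.
    Omega >= 1 is usually taken as the threshold for thermal damage.
    """
    temps_k = np.asarray(temps_c) + 273.15
    return float(np.sum(A * np.exp(-Ea / (R * temps_k)) * dt))

# Hypothetical 60 s heating curve sampled every 5 s, ramping 37 -> 60 degC.
heating = np.linspace(37, 60, 13)
omega = arrhenius_damage(heating, dt=5.0)
print(f"Omega = {omega:.2f}")
```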

  12. Osteoporosis imaging: effects of bone preservation on MDCT-based trabecular bone microstructure parameters and finite element models.

    PubMed

    Baum, Thomas; Grande Garcia, Eduardo; Burgkart, Rainer; Gordijenko, Olga; Liebl, Hans; Jungmann, Pia M; Gruber, Michael; Zahel, Tina; Rummeny, Ernst J; Waldt, Simone; Bauer, Jan S

    2015-06-26

    Osteoporosis is defined as a skeletal disorder characterized by compromised bone strength due to a reduction of bone mass and deterioration of bone microstructure, predisposing an individual to an increased risk of fracture. Trabecular bone microstructure analysis and finite element models (FEM) have been shown to improve the prediction of bone strength beyond bone mineral density (BMD) measurements. These computational methods have been developed and validated in specimens preserved in formalin solution or by freezing. However, little is known about the effects of preservation on trabecular bone microstructure and FEM. The purpose of this observational study was to investigate the effects of preservation on trabecular bone microstructure and FEM in human vertebrae. Four thoracic vertebrae were harvested from each of three fresh human cadavers (n=12). Multi-detector computed tomography (MDCT) images were obtained at baseline, 3 and 6 month follow-up. In the intervals between MDCT imaging, two vertebrae from each donor were formalin-fixed and frozen, respectively. BMD, trabecular bone microstructure parameters (histomorphometry and fractal dimension), and FEM-based apparent compressive modulus (ACM) were determined in the MDCT images and validated by mechanical testing to failure of the vertebrae after 6 months. Changes of BMD, trabecular bone microstructure parameters, and FEM-based ACM in formalin-fixed and frozen vertebrae over 6 months ranged between 1.0-5.6% and 1.3-6.1%, respectively, and were not statistically significant (p>0.05). BMD, trabecular bone microstructure parameters, and FEM-based ACM as assessed at baseline, 3 and 6 month follow-up correlated significantly with mechanically determined failure load (r=0.89-0.99; p<0.05). The correlation coefficients r were not significantly different for the two preservation methods (p>0.05). Formalin fixation and freezing up to six months showed no significant effects on trabecular bone microstructure and FEM-based ACM in human vertebrae and may both be used in corresponding in vitro experiments in the context of osteoporosis.

  13. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated by exploiting the tensorial structure of the image. The first contribution of the paper is the identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution is a clustering-based estimation of the number of materials present in the image as well as of the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumors (basal cell carcinoma).
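
    The recovery step described above, a 3-mode multiplication of the multi-spectral image tensor with the (pseudo-)inverse of the spectral-profile matrix, can be sketched in a few lines of NumPy; the synthetic tensor, the `unmix_mode3` helper and the material count below are illustrative assumptions.

```python
import numpy as np

def unmix_mode3(image_tensor, spectral_profiles):
    """Recover per-material spatial distributions from a multi-spectral image.

    image_tensor      : (H, W, S) multi-spectral image (S spectral bands)
    spectral_profiles : (S, M) matrix whose columns are the spectral
                        responses of the M materials
    Returns an (H, W, M) tensor of spatial distributions, i.e. the 3-mode
    product of the image tensor with pinv(spectral_profiles).
    """
    A_pinv = np.linalg.pinv(spectral_profiles)          # (M, S)
    return np.einsum('hws,ms->hwm', image_tensor, A_pinv)

# Synthetic example: 3 materials mixed into an RGB (S = 3) image.
H, W, S, M = 64, 64, 3, 3
true_dist = np.random.rand(H, W, M)           # ground-truth distributions
A = np.random.rand(S, M)                      # spectral profiles
img = np.einsum('hwm,sm->hws', true_dist, A)  # forward mixing
recovered = unmix_mode3(img, A)
print(np.allclose(recovered, true_dist))
```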

  14. Ultra-high spatial resolution multi-energy CT using photon counting detector technology

    NASA Astrophysics Data System (ADS)

    Leng, S.; Gutjahr, R.; Ferrero, A.; Kappler, S.; Henning, A.; Halaweish, A.; Zhou, W.; Montoya, J.; McCollough, C.

    2017-03-01

    Two ultra-high-resolution (UHR) imaging modes, each with two energy thresholds, were implemented on a research, whole-body photon-counting-detector (PCD) CT scanner, referred to as sharp and UHR, respectively. The UHR mode has a pixel size of 0.25 mm at iso-center for both energy thresholds, with a collimation of 32 × 0.25 mm. The sharp mode has a 0.25 mm pixel for the low-energy threshold and 0.5 mm for the high-energy threshold, with a collimation of 48 × 0.25 mm. Kidney stones with mixed mineral composition and lung nodules with different shapes were scanned using both modes, and with the standard imaging mode, referred to as macro mode (0.5 mm pixel and 32 × 0.5 mm collimation). Evaluation and comparison of the three modes focused on the ability to accurately delineate anatomic structures using the high-spatial resolution capability and the ability to quantify stone composition using the multi-energy capability. The low-energy threshold images of the sharp and UHR modes showed better shape and texture information due to the achieved higher spatial resolution, although noise was also higher. No noticeable benefit was shown in multi-energy analysis using UHR compared to standard resolution (macro mode) when standard doses were used. This was due to excessive noise in the higher resolution images. However, UHR scans at higher dose showed improvement in multi-energy analysis over macro mode with regular dose. To fully take advantage of the higher spatial resolution in multi-energy analysis, either increased radiation dose, or application of noise reduction techniques, is needed.

  15. Bone texture analysis on dental radiographic images: results with several angulated radiographs on the same region of interest

    NASA Astrophysics Data System (ADS)

    Amouriq, Yves; Guedon, Jeanpierre; Normand, Nicolas; Arlicot, Aurore; Benhdech, Yassine; Weiss, Pierre

    2011-03-01

    Bone microarchitecture is a predictor of bone quality and bone disease. It can only be measured directly on a bone biopsy, which is invasive and not available in all clinical situations. Texture analysis of radiographs is a common way to investigate bone microarchitecture, but the relationship between three-dimensional histomorphometric parameters and two-dimensional texture parameters is not well established, and reported correlations are often poor. The aim of this study is to acquire angulated radiographs of the same region of interest and to determine whether a stronger relationship between texture analysis on several radiographs and histomorphometric parameters can be established. Computed radiography images of dog (Beagle) mandible sections in molar regions were compared with high-resolution micro-CT (computed tomography) volumes. Four radiographs angulated by 27° (up, down, left and right, using a Rinn ring and a customized arm-positioning system) were acquired from the initial radiograph position. Bone texture parameters were calculated on all images. Texture parameters were also computed from new images obtained as differences between angulated images. Fractal values in different trabecular areas provide some characterization of bone microarchitecture.

  16. Automatic digital surface model (DSM) generation from aerial imagery data

    NASA Astrophysics Data System (ADS)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing reduces the effects of inherent radiometric problems and optimizes the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales and land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
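
    The geometrically constrained cross-correlation and least-squares refinement steps are beyond a short example, but the core similarity measure used in such dense-matching pipelines, normalized cross-correlation between a template around a feature point and candidate windows in the other image, can be sketched as follows; the `match_point` helper, the window sizes and the synthetic image pair are illustrative assumptions.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def match_point(left, right, pt, half=7, search=15):
    """Find the best match for `pt` (row, col) of `left` inside `right`.

    A (2*half+1)^2 template around pt is compared against every candidate
    position within +/- `search` pixels; the position with the highest
    NCC and its score are returned.
    """
    r, c = pt
    tmpl = left[r - half:r + half + 1, c - half:c + half + 1]
    best, best_pos = -1.0, None
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = right[rr - half:rr + half + 1, cc - half:cc + half + 1]
            if cand.shape != tmpl.shape:
                continue                      # candidate falls off the image
            score = ncc(tmpl, cand)
            if score > best:
                best, best_pos = score, (rr, cc)
    return best_pos, best

# Synthetic stereo pair: the right image is the left shifted by 3 columns.
left = np.random.rand(100, 100)
right = np.roll(left, 3, axis=1)
print(match_point(left, right, (50, 50)))   # expect a match near (50, 53)
```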

  17. Assessment of Abdominal Adipose Tissue and Organ Fat Content by Magnetic Resonance Imaging

    PubMed Central

    Hu, Houchun H.; Nayak, Krishna S.; Goran, Michael I.

    2010-01-01

    As the prevalence of obesity continues to rise, rapid and accurate tools for assessing abdominal body and organ fat quantity and distribution are critically needed to assist researchers investigating therapeutic and preventive measures against obesity and its comorbidities. Magnetic resonance imaging (MRI) is the most promising modality to address this need. It is non-invasive, utilizes no ionizing radiation, provides unmatched 3D visualization, is repeatable, and is applicable to subject cohorts of all ages. This article aims to provide the reader with an overview of current and state-of-the-art techniques in MRI and associated image analysis methods for fat quantification. The principles underlying traditional approaches such as T1-weighted imaging and magnetic resonance spectroscopy as well as more modern chemical-shift imaging techniques are discussed and compared. The benefits of contiguous 3D acquisitions over 2D multi-slice approaches are highlighted. Typical post-processing procedures for extracting adipose tissue depot volumes and percent organ fat content from abdominal MRI data sets are explained. Furthermore, the advantages and disadvantages of each MRI approach with respect to imaging parameters, spatial resolution, subject motion, scan time, and appropriate fat quantitative endpoints are also provided. Practical considerations in implementing these methods are also presented. PMID:21348916

  18. Optical perception for detection of cutaneous T-cell lymphoma by multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Hsiao, Yu-Ping; Wang, Hsiang-Chen; Chen, Shih-Hua; Tsai, Chung-Hung; Yang, Jen-Hung

    2014-12-01

    In this study, the spectrum of each picture element of the patient's skin image was obtained by multi-spectral imaging technology. Spectra of normal and pathological skin were collected from 15 patients. Principal component analysis and principal component scores of the skin spectra were employed to distinguish the spectral characteristics of different diseases. Finally, skin regions with suspected cutaneous T-cell lymphoma (CTCL) lesions were successfully predicted by evaluation and classification of the spectra of pathological skin. The sensitivity and specificity of this technique were 89.65% and 95.18% after the analysis of about 109 patients. The probabilities of atopic dermatitis and psoriasis patients being misinterpreted as having CTCL were 5.56% and 4.54%, respectively.
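
    A minimal sketch of the analysis pattern described above, principal component scores of per-pixel spectra feeding a simple classifier, is given below; the synthetic spectra, band count and the choice of logistic regression as the classifier are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for per-pixel skin spectra: 300 spectra x 31 bands,
# labelled 0 (normal) or 1 (lesion). Real data would come from the MSI system.
rng = np.random.default_rng(0)
normal = rng.normal(0.6, 0.05, size=(150, 31))
lesion = rng.normal(0.5, 0.05, size=(150, 31)) + np.linspace(0, 0.1, 31)
X = np.vstack([normal, lesion])
y = np.array([0] * 150 + [1] * 150)

# PCA reduces each spectrum to a few principal component scores,
# which then feed a simple classifier.
clf = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')
print(f"cross-validated accuracy: {scores.mean():.2f}")
```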

  19. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data

    NASA Astrophysics Data System (ADS)

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-01

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least-squares fit between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.

  20. Machine learning-based kinetic modeling: a robust and reproducible solution for quantitative analysis of dynamic PET data.

    PubMed

    Pan, Leyun; Cheng, Caixia; Haberkorn, Uwe; Dimitrakopoulou-Strauss, Antonia

    2017-05-07

    A variety of compartment models are used for the quantitative analysis of dynamic positron emission tomography (PET) data. Traditionally, these models use an iterative fitting (IF) method to find the least-squares fit between the measured and calculated values over time, which may encounter problems such as overfitting of model parameters and a lack of reproducibility, especially when handling noisy or erroneous data. In this paper, a machine learning (ML) based kinetic modeling method is introduced, which can fully utilize a historical reference database to build a moderate kinetic model that deals directly with noisy data rather than trying to smooth the noise in the image. Also, owing to the database, the presented method is capable of automatically adjusting the models using a multi-thread grid parameter searching technique. Furthermore, a candidate competition concept is proposed to combine the advantages of the ML and IF modeling methods, which can find a balance between fitting the historical data and the unseen target curve. The machine learning based method provides a robust and reproducible solution that is user-independent for VOI-based and pixel-wise quantitative analysis of dynamic PET data.
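
    The machine learning framework depends on a reference database that cannot be reproduced here, but the conventional iterative-fitting baseline it is compared against, least-squares fitting of a compartment model to a time-activity curve, can be sketched with a one-tissue compartment model; the input function, parameter values and sampling below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 60, 1.0)                 # sampling times in minutes
cp = 10.0 * t * np.exp(-t / 4.0)          # hypothetical plasma input function

def one_tissue(t, K1, k2):
    """One-tissue compartment model: C_t = K1 * exp(-k2*t) convolved with Cp."""
    dt = t[1] - t[0]
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(irf, cp)[:len(t)] * dt

# Simulate a noisy tissue curve and fit it by iterative least squares.
true_K1, true_k2 = 0.3, 0.1
noisy = one_tissue(t, true_K1, true_k2) + np.random.normal(0, 0.5, t.size)
(K1_hat, k2_hat), _ = curve_fit(one_tissue, t, noisy, p0=[0.1, 0.05],
                                bounds=([0, 0], [5, 5]))
print(f"K1 = {K1_hat:.3f}, k2 = {k2_hat:.3f}")
```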

  1. Thermal Imaging of Flame in Air-assisted Atomizer for Burner System

    NASA Astrophysics Data System (ADS)

    Amirnordin, S. H.; Khalid, Amir; Zailan, M. F.; Fawzi, Mas; Salleh, Hamidon; Zaman, Izzuddin

    2017-08-01

    Infrared thermography was used as part of a non-intrusive technique for flame temperature analysis. This paper demonstrates the technique for generating thermal images of the flame from an air-assisted atomizer. The multi-circular jet plate acts as a turbulence generator to improve fuel and air mixing in the atomizer. Three types of multi-circular jet plate geometry were analysed at different equivalence ratios. Thermal infrared imaging with a FLIR thermal camera was used to obtain the flame temperature. Multi-circular jet plate 1 produced the highest flame temperature of the plates compared. It can be concluded that the geometry of the plate influences the combustion and hence affects the flame temperature profile of the air-assisted atomizer.

  2. Fast Physically Correct Refocusing for Sparse Light Fields Using Block-Based Multi-Rate View Interpolation.

    PubMed

    Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee

    2017-02-01

    Digital refocusing has a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. Regarding its conventional high complexity, we devised a fast line-scan method specifically for refocusing, and its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results are provided to show superior refocusing quality and fast computation speed. In particular, the run time is comparable with the conventional single-image blurring, which causes serious boundary artifacts.

  3. Accelerating Fibre Orientation Estimation from Diffusion Weighted Magnetic Resonance Imaging Using GPUs

    PubMed Central

    Hernández, Moisés; Guerrero, Ginés D.; Cecilia, José M.; García, José M.; Inuggi, Alberto; Jbabdi, Saad; Behrens, Timothy E. J.; Sotiropoulos, Stamatios N.

    2013-01-01

    With the performance of central processing units (CPUs) having effectively reached a limit, parallel processing offers an alternative for applications with high computational demands. Modern graphics processing units (GPUs) are massively parallel processors that can execute simultaneously thousands of light-weight processes. In this study, we propose and implement a parallel GPU-based design of a popular method that is used for the analysis of brain magnetic resonance imaging (MRI). More specifically, we are concerned with a model-based approach for extracting tissue structural information from diffusion-weighted (DW) MRI data. DW-MRI offers, through tractography approaches, the only way to study brain structural connectivity, non-invasively and in-vivo. We parallelise the Bayesian inference framework for the ball & stick model, as it is implemented in the tractography toolbox of the popular FSL software package (University of Oxford). For our implementation, we utilise the Compute Unified Device Architecture (CUDA) programming model. We show that the parameter estimation, performed through Markov Chain Monte Carlo (MCMC), is accelerated by at least two orders of magnitude, when comparing a single GPU with the respective sequential single-core CPU version. We also illustrate similar speed-up factors (up to 120x) when comparing a multi-GPU with a multi-CPU implementation. PMID:23658616

  4. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called the MGS (Modular Ground Segment) and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolutivity, and reduction of life-cycle cost. The general performance of the MGS is presented: type of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  5. Function Biomedical Informatics Research Network Recommendations for Prospective Multi-Center Functional Magnetic Resonance Imaging Studies

    PubMed Central

    Glover, Gary H.; Mueller, Bryon A.; Turner, Jessica A.; van Erp, Theo G.M.; Liu, Thomas T.; Greve, Douglas N.; Voyvodic, James T.; Rasmussen, Jerod; Brown, Gregory G.; Keator, David B.; Calhoun, Vince D.; Lee, Hyo Jong; Ford, Judith M.; Mathalon, Daniel H.; Diaz, Michele; O’Leary, Daniel S.; Gadde, Syam; Preda, Adrian; Lim, Kelvin O.; Wible, Cynthia G.; Stern, Hal S.; Belger, Aysenil; McCarthy, Gregory; Ozyurt, Burak; Potkin, Steven G.

    2011-01-01

    This report provides practical recommendations for the design and execution of Multi-Center functional Magnetic Resonance Imaging (MC-fMRI) studies based on the collective experience of the Function Biomedical Informatics Research Network (FBIRN). The paper was inspired by many requests from the fMRI community to FBIRN group members for advice on how to conduct MC-fMRI studies. The introduction briefly discusses the advantages and complexities of MC-fMRI studies. Prerequisites for MC-fMRI studies are addressed before delving into the practical aspects of carefully and efficiently setting up an MC-fMRI study. Practical multi-site aspects include: (1) establishing and verifying scan parameters including scanner types and magnetic fields, (2) establishing and monitoring of a scanner quality program, (3) developing task paradigms and scan session documentation, (4) establishing clinical and scanner training to ensure consistency over time, (5) developing means for uploading, storing, and monitoring of imaging and other data, (6) the use of a traveling fMRI expert and (7) collectively analyzing imaging data and disseminating results. We conclude that when MC-fMRI studies are organized well with careful attention to unification of hardware, software and procedural aspects, the process can be a highly effective means for accessing a desired participant demographic while accelerating scientific discovery. PMID:22314879

  6. Skin condition measurement by using multispectral imaging system (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Jung, Geunho; Kim, Sungchul; Kim, Jae Gwan

    2017-02-01

    There are a number of commercially available low-level light therapy (LLLT) devices on the market, and face whitening or wrinkle reduction is one of the targets of LLLT. Facial improvement can be assessed simply by visual observation of the face, but this provides neither quantitative data nor the ability to recognize subtle changes. Clinical diagnostic instruments such as the Mexameter provide quantitative data, but their cost is too high for home users. Therefore, we designed a low-cost multi-spectral imaging device by adding additional LEDs (470 nm, 640 nm, white LED, 905 nm) to a commercial USB microscope which has two LEDs (395 nm, 940 nm) as light sources. Among the various LLLT skin treatments, we focused on obtaining melanin and wrinkle information. For melanin index measurements, multi-spectral images of nevi were acquired and melanin index values from a color image (the conventional method) and from multi-spectral images were compared. The results showed that multi-spectral analysis of the melanin index can visualize nevi of different depth and concentration. A cross-section of a skin wrinkle resembles a wedge, which is a source of high-frequency components when the skin image is Fourier-transformed into a spatial-frequency map. In that case, the entropy of the spatial-frequency map represents the frequency distribution, which is related to the amount and thickness of wrinkles. Entropy values from multi-spectral images can potentially separate the fraction of thin, shallow wrinkles from that of thick, deep wrinkles. From these results, we found that this low-cost multi-spectral imaging system could be beneficial for home users of LLLT by providing treatment efficacy in a quantitative way.
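
    The entropy of the spatial-frequency map mentioned above can be computed, for example, as the Shannon entropy of the normalized Fourier magnitude spectrum; the exact definition used by the authors is not given in the abstract, so the sketch below, including the synthetic smooth and wrinkled patches, is an assumption for illustration.

```python
import numpy as np

def spectral_entropy(image):
    """Shannon entropy of the normalized magnitude spectrum of an image.

    A wedge-like wrinkle cross-section spreads energy into high spatial
    frequencies, which shows up as a broader spectrum and higher entropy.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Smooth skin patch vs. one with a sharp line-like 'wrinkle'.
smooth = np.outer(np.hanning(64), np.hanning(64))
wrinkled = smooth.copy()
wrinkled[:, 30:32] -= 0.5
print(spectral_entropy(smooth), spectral_entropy(wrinkled))
```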

  7. Multi-viewer tracking integral imaging system and its viewing zone analysis.

    PubMed

    Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho

    2009-09-28

    We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles in each elemental lens of the lens array are determined by the positions of the viewers, which means the elemental image can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.

  8. Preliminary error budget analysis of the coronagraphic instrument METIS for the Solar Orbiter ESA mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, Vania; Poletto, Luca; Crescenzio, Giuseppe; Fineschi, Silvano; Antonucci, Ester; Naletto, Giampiero

    2017-11-01

    METIS, the Multi Element Telescope for Imaging and Spectroscopy, is the solar coronagraph foreseen for the ESA Solar Orbiter mission. METIS is conceived to image the solar corona from a near-Sun orbit in three different spectral bands: in the HeII EUV narrow band at 30.4 nm, in the HI UV narrow band at 121.6 nm, and in the polarized visible light band (590-650 nm). It also incorporates the capability of multi-slit spectroscopy of the corona in the UV/EUV range at different heliocentric heights. METIS is an externally occulted coronagraph which adopts an "inverted occulted" configuration. The Inverted external occulter (IEO) is a small circular aperture at the METIS entrance; the Sun-disk light is rejected by a spherical mirror M0 through the same aperture, while the coronal light is collected by two annular mirrors M1-M2 forming a Gregorian telescope. To accommodate the spectroscopic part, one portion of M2 is covered by a grating (i.e. approximately 1/8 of the solar corona will not be imaged). This paper presents the error budget analysis for this new-concept coronagraph configuration, which incorporates three different sub-channels: the UV and EUV imaging sub-channel, in which the UV and EUV light paths share the detector and all of the optical elements except a filter; the polarimetric visible-light sub-channel which, after the telescope optics, has dedicated relay optics and a polarizing unit; and the spectroscopic sub-channel, which shares the filters and the detector with the UV-EUV imaging one, but includes a grating instead of the secondary mirror. The tolerance analysis of such an instrument is quite complex: not only must the optical performance of the three sub-channels be maintained simultaneously, but the positions of M0 and of the occulters (IEO, internal occulter and Lyot stop), which guarantee optimal disk-light suppression, must also be taken into account as tolerancing parameters. With the aim of ensuring that the scientific requirements are optimally fulfilled for all the sub-channels, the preliminary results of the manufacturing, alignment and stability tolerance analysis for the whole instrument are described and discussed.

  9. Development of the algorithm of measurement data and tomographic section reconstruction results processing for evaluating the respiratory activity of the lungs using the multi-angle electric impedance tomography

    NASA Astrophysics Data System (ADS)

    Aleksanyan, Grayr; Shcherbakov, Ivan; Kucher, Artem; Sulyz, Andrew

    2018-04-01

    Continuous monitoring of the patient's breathing by multi-angle electrical impedance tomography makes it possible to obtain images of conductivity changes in the chest cavity during the monitoring period. Direct analysis of these images is difficult due to the large amount of information and the low resolution of the images obtained by multi-angle electrical impedance tomography. This work presents a method for obtaining a graph of the respiratory activity of the lungs based on the results of continuous lung monitoring using the multi-angle electrical impedance tomography method. The method makes it possible to obtain a graph of the respiratory activity of the left and right lungs separately, as well as a summary graph, to which standard spirography processing methods can be applied.

  10. Ceramic Electron Multiplier

    DOE PAGES

    Comby, G.

    1996-10-01

    The Ceramic Electron Multiplier (CEM) is a compact, robust, linear and fast multi-channel electron multiplier. The Multi-Layer Ceramic Technique (MLCT) makes it possible to build metallic dynodes inside a compact ceramic block. Activation of the metallic dynodes enhances their secondary electron emission (SEE). The CEM can be used in multi-channel photomultipliers, multi-channel light intensifiers, ion detection, spectroscopy, analysis of time-of-flight events, particle detection and Cherenkov imaging detectors.

  11. Quantification of vocal fold motion using echography: application to recurrent nerve paralysis detection

    NASA Astrophysics Data System (ADS)

    Cohen, Mike-Ely; Lefort, Muriel; Bergeret-Cassagne, Héloïse; Hachi, Siham; Li, Ang; Russ, Gilles; Lazard, Diane; Menegaux, Fabrice; Leenhardt, Laurence; Trésallet, Christophe; Frouin, Frédérique

    2015-03-01

    Recurrent nerve paralysis (RP) is one of the most frequent complications of thyroid surgery. It reduces vocal fold mobility. Nasal endoscopy, a mini-invasive procedure, is the conventional way to detect RP. We propose a new approach based on laryngeal ultrasound, with a specific data analysis designed to help with the automated detection of RP. Ten subjects were enrolled in this feasibility study: four controls, three patients with RP and three patients without RP according to nasal endoscopy. The ultrasound protocol was based on a ten-second B-mode acquisition in a coronal plane during normal breathing. Image processing included three steps: 1) automated detection of two consecutive closing and opening images, corresponding to extreme positions of the vocal folds in the sequence of B-mode images, using principal component analysis of the image sequence; 2) positioning of three landmarks and robust tracking of these points using a multi-pyramidal refined optical flow approach; 3) estimation of quantitative parameters indicating left and right fractions of mobility, and motion symmetry. Results provided by automated image processing were compared to those obtained by an expert. Detection of extreme images was accurate; tracking of landmarks was reliable in 80% of cases. Motion symmetry indices showed similar values for controls and patients without RP. The fraction of mobility was reduced in cases of RP. Thus, our CAD system helped in the detection of RP. Laryngeal ultrasound combined with appropriate image processing helped in the diagnosis of recurrent nerve paralysis and could be proposed as a first-line method.
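
    The multi-pyramidal optical-flow tracking step can be illustrated with OpenCV's pyramidal Lucas-Kanade tracker; the synthetic frames, landmark coordinates and tracker settings below are placeholders, and the pre-processing applied to real B-mode sequences is not shown.

```python
import numpy as np
import cv2

# Two synthetic 8-bit "frames": the second is the first shifted by 2 px.
frame0 = (np.random.rand(240, 320) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=(2, 2), axis=(0, 1))

# Three hypothetical landmarks placed on the vocal folds (x, y order).
pts0 = np.array([[[100.0, 120.0]], [[150.0, 130.0]], [[200.0, 125.0]]],
                dtype=np.float32)

# Pyramidal Lucas-Kanade tracking of the landmarks between the two frames.
pts1, status, err = cv2.calcOpticalFlowPyrLK(
    frame0, frame1, pts0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

for p0, p1, ok in zip(pts0[:, 0], pts1[:, 0], status[:, 0]):
    if ok:
        print(f"landmark moved from {p0} to {p1}")
```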

  12. FLIM data analysis of NADH and Tryptophan autofluorescence in prostate cancer cells

    NASA Astrophysics Data System (ADS)

    O'Melia, Meghan J.; Wallrabe, Horst; Svindrych, Zdenek; Rehman, Shagufta; Periasamy, Ammasi

    2016-03-01

    Fluorescence lifetime imaging microscopy (FLIM) is one of the most sensitive techniques to measure metabolic activity in living cells, tissues and whole animals. We used two- and three-photon fluorescence excitation together with time-correlated single photon counting (TCSPC) to acquire FLIM signals from normal and prostate cancer cell lines. FLIM requires complex data fitting and analysis; we explored different ways to analyze the data to match diverse cellular morphologies. After non-linear least-squares fitting of the multi-photon TCSPC images by the SPCImage software (Becker & Hickl), all image data are exported and further processed in ImageJ. Photon images provide morphological, NAD(P)H signal-based autofluorescent features, for which regions of interest (ROIs) are created. Applying these ROIs to all image data parameters with a custom ImageJ macro generates a discrete, ROI-specific database. A custom Excel (Microsoft) macro further analyzes the data with charts and statistics. Applying this highly automated assay we compared normal and cancer prostate cell lines with respect to their glycolytic activity by analyzing the NAD(P)H-bound fraction (a2%), NADPH/NADH ratio and efficiency of energy transfer (E%) for Tryptophan (Trp). Our results show that this assay is able to differentiate the effects of glucose stimulation and Doxorubicin in these prostate cell lines by tracking the changes in a2% of NAD(P)H, the NADPH/NADH ratio and the changes in Trp E%. The ability to isolate a large, ROI-based data set, reflecting the heterogeneous cellular environment and highlighting even subtle changes rather than whole-cell averages, makes this assay particularly valuable.

  13. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We present early work focused specifically on deblurring sequential MSI images, distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, considering the different wavelengths used for capturing different images in the MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
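
    Mutual information between two images, used above to model the similarity between temporally neighbouring MSI frames, can be estimated from their joint histogram; the bin count, the `mutual_information` helper and the synthetic image pair below are illustrative assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate mutual information (in bits) from the joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image B
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Two hypothetical spectral images of the same retina: one is a
# contrast-stretched copy of the other, so their MI is high.
a = np.random.rand(256, 256)
b = np.clip(0.8 * a + 0.1, 0, 1)
print(f"MI(a, b) = {mutual_information(a, b):.2f} bits")
print(f"MI(a, noise) = {mutual_information(a, np.random.rand(256, 256)):.2f} bits")
```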

  14. Quantitative Assessment of Heart Rate Dynamics during Meditation: An ECG Based Study with Multi-Fractality and Visibility Graph

    PubMed Central

    Bhaduri, Anirban; Ghosh, Dipak

    2016-01-01

    The cardiac dynamics during meditation is explored quantitatively with two chaos-based non-linear techniques, viz. multi-fractal detrended fluctuation analysis and visibility network analysis. The data used are the instantaneous heart rate (in beats/minute) of subjects performing Kundalini Yoga and Chi meditation, obtained from PhysioNet. The results show consistent differences between the quantitative parameters obtained by the two analysis techniques. This indicates an interesting change in the complexity of the cardiac dynamics during meditation, supported by quantitative parameters. The results also provide preliminary evidence that these techniques can be used as a measure of the physiological impact of meditation on subjects. PMID:26909045

  15. Quantitative Assessment of Heart Rate Dynamics during Meditation: An ECG Based Study with Multi-Fractality and Visibility Graph.

    PubMed

    Bhaduri, Anirban; Ghosh, Dipak

    2016-01-01

    The cardiac dynamics during meditation is explored quantitatively with two chaos-based non-linear techniques, viz. multi-fractal detrended fluctuation analysis and visibility network analysis. The data used are the instantaneous heart rate (in beats/minute) of subjects performing Kundalini Yoga and Chi meditation, obtained from PhysioNet. The results show consistent differences between the quantitative parameters obtained by the two analysis techniques. This indicates an interesting change in the complexity of the cardiac dynamics during meditation, supported by quantitative parameters. The results also provide preliminary evidence that these techniques can be used as a measure of the physiological impact of meditation on subjects.
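
    The visibility-graph construction underlying the network analysis can be sketched directly: a heart-rate series is converted into a graph in which two samples are connected whenever every intervening sample lies below the line joining them. The brute-force helper and the synthetic heart-rate series below are illustrative; production implementations use faster algorithms.

```python
import numpy as np

def natural_visibility_graph(series):
    """Adjacency matrix of the natural visibility graph of a 1D series.

    Samples (i, y_i) and (j, y_j) are connected if every intermediate
    sample lies strictly below the straight line joining them.
    """
    y = np.asarray(series, dtype=float)
    n = len(y)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                adj[i, j] = adj[j, i] = True
    return adj

# Hypothetical instantaneous heart-rate series (beats/minute).
hr = 60 + 5 * np.sin(np.linspace(0, 6 * np.pi, 200)) + np.random.randn(200)
adj = natural_visibility_graph(hr)
degrees = adj.sum(axis=1)
print(f"mean degree: {degrees.mean():.2f}")
```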

  16. Multi-Band Miniaturized Patch Antennas for a Compact, Shielded Microwave Breast Imaging Array.

    PubMed

    Aguilar, Suzette M; Al-Joumayly, Mudar A; Burfeindt, Matthew J; Behdad, Nader; Hagness, Susan C

    2013-12-18

    We present a comprehensive study of a class of multi-band miniaturized patch antennas designed for use in a 3D enclosed sensor array for microwave breast imaging. Miniaturization and multi-band operation are achieved by loading the antenna with non-radiating slots at strategic locations along the patch. This results in symmetric radiation patterns and similar radiation characteristics at all frequencies of operation. Prototypes were fabricated and tested in a biocompatible immersion medium. Excellent agreement was obtained between simulations and measurements. The trade-off between miniaturization and radiation efficiency within this class of patch antennas is explored via a numerical analysis of the effects of the location and number of slots, as well as the thickness and permittivity of the dielectric substrate, on the resonant frequencies and gain. Additionally, we compare 3D quantitative microwave breast imaging performance achieved with two different enclosed arrays of slot-loaded miniaturized patch antennas. Simulated array measurements were obtained for a 3D anatomically realistic numerical breast phantom. The reconstructed breast images generated from miniaturized patch array data suggest that, for the realistic noise power levels assumed in this study, the variations in gain observed across this class of multi-band patch antennas do not significantly impact the overall image quality. We conclude that these miniaturized antennas are promising candidates as compact array elements for shielded, multi-frequency microwave breast imaging systems.

  17. Application of Deep Learning of Multi-Temporal SENTINEL-1 Images for the Classification of Coastal Vegetation Zone of the Danube Delta

    NASA Astrophysics Data System (ADS)

    Niculescu, S.; Ienco, D.; Hanganu, J.

    2018-04-01

    Land cover is a fundamental variable for regional planning, as well as for the study and understanding of the environment. This work proposes a multi-temporal approach relying on a fusion of multi-sensor radar data and information collected by the latest sensor (Sentinel-1), with a view to obtaining better results than traditional image processing techniques. The Danube Delta is the study site for this work. The spatial approach relies on new spatial analysis technologies and methodologies: deep learning of multi-temporal Sentinel-1 data. We propose a deep learning network for image classification which exploits the multi-temporal characteristic of Sentinel-1 data. The model we employ is a Gated Recurrent Unit (GRU) network, a recurrent neural network that explicitly takes into account the time dimension via a gated mechanism to perform the final prediction. The main quality of the GRU network is its ability to consider only the important part of the information coming from the temporal data, discarding the irrelevant information via a forgetting mechanism. We use this network structure to classify a series of Sentinel-1 images (20 images acquired between 09.10.2014 and 01.04.2016). The results are compared with those of a Random Forest classification.
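
    A minimal sketch of a GRU-based classifier over a per-pixel multi-temporal series (e.g. 20 Sentinel-1 dates with two polarization bands each) is given below in PyTorch; the band count, hidden size, class count and training snippet are illustrative assumptions and do not reproduce the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """Classify a per-pixel time series with a GRU and a linear head."""
    def __init__(self, n_bands=2, hidden=64, n_classes=8):
        super().__init__()
        self.gru = nn.GRU(input_size=n_bands, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, bands)
        _, h_n = self.gru(x)              # h_n: (1, batch, hidden)
        return self.head(h_n[-1])         # logits: (batch, n_classes)

# Hypothetical batch of 32 pixels, 20 acquisition dates, 2 bands each.
model = GRUClassifier()
x = torch.randn(32, 20, 2)
y = torch.randint(0, 8, (32,))
logits = model(x)
loss = nn.CrossEntropyLoss()(logits, y)
loss.backward()
print(logits.shape, float(loss))
```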

  18. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method for calculating parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is collapsed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, the arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
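
    The deconvolution step at the heart of such CTP pipelines can be sketched by building a convolution matrix from the arterial input function (AIF) and inverting it with a truncated (regularized) singular value decomposition to recover the flow-scaled residue function; the AIF shape, noise level and truncation threshold below are illustrative assumptions, and the regularization strength is exactly the kind of parameter whose effect the cascaded systems analysis quantifies.

```python
import numpy as np

def svd_deconvolve(aif, tissue, dt, rel_threshold=0.1):
    """Recover the flow-scaled residue function by truncated-SVD deconvolution.

    Solves C_tissue = dt * A @ r, where A is the lower-triangular
    convolution matrix built from the AIF.  Singular values below
    rel_threshold * s_max are discarded (the regularization step).
    """
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1]               # A[i, j] = aif[i - j]
    U, s, Vt = np.linalg.svd(dt * A)
    s_inv = np.where(s > rel_threshold * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

# Synthetic example: gamma-variate AIF, exponential residue function.
dt = 1.0
t = np.arange(0, 40, dt)
aif = (t ** 3) * np.exp(-t / 1.5)
residue = 0.02 * np.exp(-t / 4.0)                # flow-scaled residue, peak 0.02
tissue = dt * np.convolve(aif, residue)[:len(t)]
tissue += np.random.normal(0, 0.02 * tissue.max(), t.size)
r_hat = svd_deconvolve(aif, tissue, dt)
print(f"estimated peak of the residue function: {r_hat.max():.4f} (true 0.02)")
```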

  19. One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI.

    PubMed

    Arabi, Hossein; Zaidi, Habib

    2016-10-01

    The outcome of a detailed assessment of various strategies for atlas-based whole-body bone segmentation from magnetic resonance imaging (MRI) was exploited to select the optimal parameters and setting, with the aim of proposing a novel one-registration multi-atlas (ORMA) pseudo-CT generation approach. The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them to the target image. The performance characteristics of the proposed method were evaluated and compared with conventional atlas-based attenuation map generation strategies (direct registration of the entire atlas images followed by voxel-wise weighting (VWW) and arithmetic averaging atlas fusion). To this end, four different positron emission tomography (PET) attenuation maps were generated via arithmetic averaging and the VWW scheme using both direct registration and ORMA approaches, as well as the 3-class attenuation map obtained from the Philips Ingenuity TF PET/MRI scanner commonly used in the clinical setting. The evaluation was performed based on the accuracy of whole-body bone extraction by the different attenuation maps and by quantitative analysis of the resulting PET images compared to CT-based attenuation-corrected PET images serving as reference. The comparison of validation metrics regarding the accuracy of extracted bone using the different techniques demonstrated the superiority of the VWW atlas fusion algorithm, achieving a Dice similarity measure of 0.82 ± 0.04 compared to arithmetic averaging atlas fusion (0.60 ± 0.02), which uses conventional direct registration. Application of the ORMA approach modestly compromised the accuracy, yielding a Dice similarity measure of 0.76 ± 0.05 for ORMA-VWW and 0.55 ± 0.03 for ORMA-averaging. The results of quantitative PET analysis followed the same trend with less significant differences in terms of SUV bias, whereas massive improvements were observed compared to PET images corrected for attenuation using the 3-class attenuation map. The maximum absolute bias achieved by the VWW and ORMA-VWW methods was 6.4 ± 5.5 in the lung and 7.9 ± 4.8 in bone, respectively. The proposed algorithm is capable of generating decent attenuation maps. The quantitative analysis revealed a good correlation between PET images corrected for attenuation using the proposed pseudo-CT generation approach and the corresponding CT images. The computation time is reduced to 1/N of that of direct registration at the expense of a modest decrease in quantitative accuracy, thus allowing a reasonable compromise between computing time and quantitative performance.

  20. Quantification of MagLIF stagnation morphology using the Mallat Scattering Transformation

    NASA Astrophysics Data System (ADS)

    Glinsky, Michael; Weis, Matthew; Jennings, Christopher; Ampleford, David; Harding, Eric; Knapp, Patrick; Gomez, Matthew

    2017-10-01

    The morphology of the stagnated plasma resulting from MagLIF is measured by imaging the self-emission x-rays coming from the multi-keV plasma. Equivalent diagnostic response can be derived from integrated rad-hydro simulations from programs such as Hydra and Gorgon. There have been only limited quantitative ways to compare the image morphology, that is the texture, of the simulations to that of the experiments, to compare one experiment to another, or to compare one simulation to another. We have developed a metric of image morphology based on the Mallat Scattering Transformation, a transformation that has proved to be effective at distinguishing textures, sounds, and written characters. This metric has demonstrated excellent performance in classifying an ensemble of synthetic stagnation images. A good regression of the scattering coefficients to the parameters used to generate the synthetic images was found. Finally, the metric has been used to quantitatively compare simulations to experimental self-emission images. Sandia National Laboratories is a multi-mission laboratory managed and operated by NTESS, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. DOE's NNSA under contract DE-NA0003525.

  1. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms

    NASA Astrophysics Data System (ADS)

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Helvie, Mark A.; Cha, Kenny H.; Richter, Caleb D.

    2017-12-01

    Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the ‘knowledge’ learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p  =  0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.

  2. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks

    PubMed Central

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-01-01

    Simple Summary An understanding of the spatio-temporal distribution of species habitats would facilitate wildlife resource management and conservation efforts. Existing methods have poor performance due to the limited availability of training samples. More recently, location-aware sensors have been widely used to track animal movements. The aim of the study was to generate suitability maps of bar-headed geese using movement data coupled with environmental parameters, such as remote sensing images and temperature data. Therefore, we modified a deep convolutional neural network for the multi-scale inputs. The results indicate that the proposed method can identify areas with dense concentrations of the goose species around Qinghai Lake. In addition, this approach might also be interesting for implementation in other species with different niche factors or in areas where biological survey data are scarce. Abstract With the application of various data acquisition devices, a large amount of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach for combining movement data and moderate-resolution remote sensing images was proposed. First, we introduced a new density-based clustering method to identify stopovers from migratory birds' movement data and generated classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and labeled them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. Then a Support Vector Machine (SVM) model was used to combine the features and predict the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicated that our proposed method outperforms existing baseline methods and achieves good performance in habitat suitability prediction. PMID:29701686

  3. Spectroscopy as a tool for geochemical modeling

    NASA Astrophysics Data System (ADS)

    Kopacková, Veronika; Chevrel, Stephane; Bourguignon, Anna

    2011-11-01

This study focused on testing the feasibility of up-scaling ground-spectra-derived parameters to HyMap spectral and spatial resolution and whether they could be further used for a quantitative determination of the following geochemical parameters: As, pH and lignite content (Clignite). The study was carried out at the Sokolov lignite mine, as it represents a site with extreme material heterogeneity and high heavy-metal gradients. A new segmentation method based on the unique spectral properties of acid materials was developed and applied to the multi-line HyMap image data corrected for BRDF and atmospheric effects. The quantitative parameters were calculated for multiple absorption features identified within the VIS/VNIR/SWIR regions (simple band ratios, absorption band depth, and quantitative spectral feature parameters calculated dynamically for each spectral measurement: centre of the absorption band (λ), depth of the absorption band (D), width of the absorption band (Width), and asymmetry of the absorption band (S)). The degree of spectral similarity between the ground and image spectra was assessed. The linear models for pH, As and Clignite content of the whole and segmented images were cross-validated on selected homogeneous areas defined in the HS images using ground truth. For the segmented images, reliable results were achieved as follows: As: R2 = 0.84, Clignite: R2 = 0.88, and pH: R2 = 0.57.
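    The quantitative spectral feature parameters named above (λ, D, Width, S) can be illustrated with the following sketch based on simple continuum removal. The synthetic spectrum, the choice of shoulders at the window edges, and the asymmetry convention (ratio of left to right feature areas) are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    # Synthetic reflectance spectrum with one absorption feature (illustrative values only).
    wl = np.linspace(2100, 2300, 101)                        # wavelength in nm
    refl = 0.5 - 0.12 * np.exp(-((wl - 2205) / 25.0) ** 2)   # fake feature near 2205 nm

    # Straight-line continuum between the feature shoulders, then continuum removal.
    i0, i1 = 0, len(wl) - 1
    continuum = np.interp(wl, [wl[i0], wl[i1]], [refl[i0], refl[i1]])
    cr = refl / continuum

    center_idx = np.argmin(cr)
    depth = 1.0 - cr[center_idx]                  # absorption band depth D
    center = wl[center_idx]                       # band centre λ
    width = wl[cr < 1.0 - depth / 2].ptp()        # rough full width at half depth
    left = np.trapz(1.0 - cr[:center_idx], wl[:center_idx])
    right = np.trapz(1.0 - cr[center_idx:], wl[center_idx:])
    asym = left / right                           # asymmetry S (one common convention)
    print(f"lambda={center:.1f} nm, D={depth:.3f}, Width~{width:.1f} nm, S={asym:.2f}")
    ```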

  4. HCS-Neurons: identifying phenotypic changes in multi-neuron images upon drug treatments of high-content screening.

    PubMed

    Charoenkwan, Phasit; Hwang, Eric; Cutler, Robert W; Lee, Hua-Chin; Ko, Li-Wei; Huang, Hui-Ling; Ho, Shinn-Ying

    2013-01-01

High-content screening (HCS) has become a powerful tool for drug discovery. However, the discovery of drugs targeting neurons is still hampered by the inability to accurately identify and quantify the phenotypic changes of multiple neurons in a single image (named a multi-neuron image) of a high-content screen. Therefore, it is desirable to develop an automated image analysis method for analyzing multi-neuron images. We propose an automated analysis method with novel descriptors of neuromorphology features for analyzing HCS-based multi-neuron images, called HCS-neurons. To observe multiple phenotypic changes of neurons, we propose two kinds of descriptors: a neuron feature descriptor (NFD) of 13 neuromorphology features, e.g., neurite length, and generic feature descriptors (GFDs), e.g., Haralick texture. HCS-neurons can 1) automatically extract all quantitative phenotype features in both NFD and GFDs, 2) identify statistically significant phenotypic changes upon drug treatments using ANOVA and regression analysis, and 3) generate an accurate classifier to group neurons treated by different drug concentrations using a support vector machine and an intelligent feature selection method. To evaluate HCS-neurons, we treated P19 neurons with nocodazole (a microtubule-depolymerizing drug which has been shown to impair neurite development) at six concentrations ranging from 0 to 1000 ng/mL. The experimental results show that all 13 features of the NFD show statistically significant differences with respect to changes in the levels of nocodazole drug concentration (NDC), and the phenotypic changes of neurites were consistent with the known effect of nocodazole in promoting neurite retraction. Three identified features, total neurite length, average neurite length, and average neurite area, were able to achieve an independent test accuracy of 90.28% for the six-dosage classification problem. This NFD module and neuron image datasets are provided as a freely downloadable MatLab project at http://iclab.life.nctu.edu.tw/HCS-Neurons. Few automatic methods focus on analyzing multi-neuron images collected from HCS used in drug discovery. We provided an automatic HCS-based method for generating accurate classifiers to classify neurons based on their phenotypic changes upon drug treatments. The proposed HCS-neurons method is helpful in identifying and classifying chemical or biological molecules that alter the morphology of a group of neurons in HCS.
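    The statistical workflow mentioned above, per-feature ANOVA across dosage groups followed by SVM classification, can be sketched as below. The synthetic feature matrix, group sizes, and dose-dependent shift are stand-ins for the real HCS-Neurons measurements, which are not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import f_oneway
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Synthetic stand-in for per-neuron features (e.g. total/average neurite length, area)
    # measured at six nocodazole concentrations.
    concs = np.repeat(np.arange(6), 60)                            # 6 dosage groups, 60 neurons each
    features = rng.normal(size=(360, 3)) - 0.3 * concs[:, None]    # dose-dependent shift

    # 1) One-way ANOVA per feature across the dosage groups.
    for j in range(features.shape[1]):
        groups = [features[concs == c, j] for c in range(6)]
        print(f"feature {j}: ANOVA p = {f_oneway(*groups).pvalue:.2e}")

    # 2) Six-dosage classification with an SVM.
    acc = cross_val_score(SVC(kernel="rbf"), features, concs, cv=5).mean()
    print("cross-validated accuracy:", round(acc, 3))
    ```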

  5. Change detection in satellite images

    NASA Astrophysics Data System (ADS)

    Thonnessen, U.; Hofele, G.; Middelmann, W.

    2005-05-01

Change detection plays an important role in different military areas such as strategic reconnaissance, verification of armament and disarmament control, and damage assessment. It is the process of identifying differences in the state of an object or phenomenon by observing it at different times. The availability of spaceborne reconnaissance systems with high spatial resolution, multispectral capabilities, and short revisit times offers new perspectives for change detection. Before performing any kind of change detection it is necessary to separate changes of interest from changes caused by differences in data acquisition parameters. In these cases it is necessary to perform pre-processing to correct the data or to normalize it. Image registration and, corresponding to this task, the ortho-rectification of the image data is a further prerequisite for change detection. If feasible, a 1-to-1 geometric correspondence should be sought. Change detection on an iconic level with a subsequent interpretation of the changes by the observer is often proposed; nevertheless, an automatic knowledge-based analysis delivering the interpretation of the changes on a semantic level should be the aim for the future. We present first results of change detection on a structural level concerning urban areas. After pre-processing, the images are segmented into areas of interest and structural analysis is applied to these regions to extract descriptions of urban infrastructure such as buildings, roads and tanks of refineries. These descriptions are matched to detect changes and similarities.
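    A simple iconic-level baseline for the normalization-plus-differencing step described above is sketched below; it is not the structural analysis the paper develops, only an illustration of removing acquisition-related gain/offset differences before thresholding the residual. The toy images and threshold factor are assumptions.

    ```python
    import numpy as np

    def change_mask(img_t1, img_t2, k=2.0):
        """Pixel-level change detection after simple radiometric normalization.

        img_t1, img_t2: co-registered single-band images of the same scene at two dates.
        Returns a boolean mask where the normalized difference exceeds k standard deviations.
        """
        a, b = img_t1.astype(float), img_t2.astype(float)
        # Match gain/offset of the second image to the first to suppress differences
        # caused only by acquisition parameters.
        gain = a.std() / b.std()
        offset = a.mean() - gain * b.mean()
        diff = a - (gain * b + offset)
        return np.abs(diff - diff.mean()) > k * diff.std()

    # Toy usage: a random scene plus an inserted "new structure".
    rng = np.random.default_rng(2)
    t1 = rng.normal(100, 10, size=(128, 128))
    t2 = 0.9 * t1 + 5 + rng.normal(0, 2, size=(128, 128))
    t2[40:60, 40:60] += 60
    print("changed pixels:", change_mask(t1, t2).sum())
    ```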

  6. Multi-institutional MicroCT image comparison of image-guided small animal irradiators

    NASA Astrophysics Data System (ADS)

    Johnstone, Chris D.; Lindsay, Patricia; E Graves, Edward; Wong, Eugene; Perez, Jessica R.; Poirier, Yannick; Ben-Bouchta, Youssef; Kanesalingam, Thilakshan; Chen, Haijian; E Rubinstein, Ashley; Sheng, Ke; Bazalova-Carter, Magdalena

    2017-07-01

To recommend imaging protocols and establish tolerance levels for microCT image quality assurance (QA) performed on conformal image-guided small animal irradiators. A fully automated QA software SAPA (small animal phantom analyzer) for image analysis of the commercial Shelley micro-CT MCTP 610 phantom was developed, in which quantitative analyses of CT number linearity, signal-to-noise ratio (SNR), uniformity and noise, geometric accuracy, spatial resolution by means of the modulation transfer function (MTF), and CT contrast were performed. Phantom microCT scans from eleven institutions acquired with four image-guided small animal irradiator units (including the commercial PXi X-RAD SmART and Xstrahl SARRP systems) with varying parameters used for routine small animal imaging were analyzed. Multi-institutional data sets were compared using SAPA, based on which tolerance levels for each QA test were established and imaging protocols for QA were recommended. By analyzing microCT data from 11 institutions, we established image QA tolerance levels for all image quality tests. CT number linearity set to R2 > 0.990 was acceptable in microCT data acquired at all but three institutions. Acceptable SNR > 36 and noise levels < 55 HU were obtained at five of the eleven institutions, where failing scans were acquired with a tube current-exposure time product of less than 120 mAs. Acceptable spatial resolution (>1.5 lp mm-1 for MTF = 0.2) was obtained at all but four institutions due to the large image voxel size used (>0.275 mm). Ten of the eleven institutions passed the set QA tolerance for geometric accuracy (<1.5%) and nine of the eleven institutions passed the QA tolerance for contrast (>2000 HU for 30 mgI ml-1). We recommend performing imaging QA with 70 kVp, 1.5 mA, 120 s imaging time, 0.20 mm voxel size, and a frame rate of 5 fps for the PXi X-RAD SmART. For the Xstrahl SARRP, we recommend using 60 kVp, 1.0 mA, 240 s imaging time, 0.20 mm voxel size, and 6 fps. These imaging protocols should result in high quality images that pass the set tolerance levels on all systems. Average SAPA computation time for a complete QA analysis of a 0.20 mm voxel, 400 slice Shelley phantom microCT data set was less than 20 s. We present image quality assurance recommendations for image-guided small animal radiotherapy systems that can aid researchers in maintaining high image quality, allowing for spatially precise conformal dose delivery to small animals.
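    The tolerance checks listed above (CT number linearity, SNR, noise) can be expressed compactly as in the sketch below; this is not the SAPA implementation, only an illustration assuming one common SNR definition (mean/standard deviation in a uniform ROI) and made-up HU values.

    ```python
    import numpy as np

    def linearity_r2(measured_hu, nominal_hu):
        """R^2 of a linear fit of measured vs. nominal CT numbers (linearity test)."""
        slope, intercept = np.polyfit(nominal_hu, measured_hu, 1)
        pred = slope * np.asarray(nominal_hu) + intercept
        ss_res = np.sum((np.asarray(measured_hu) - pred) ** 2)
        ss_tot = np.sum((np.asarray(measured_hu) - np.mean(measured_hu)) ** 2)
        return 1.0 - ss_res / ss_tot

    def snr_and_noise(uniform_roi_hu):
        """SNR = mean/std and noise = std (HU) in a uniform phantom ROI."""
        roi = np.asarray(uniform_roi_hu, dtype=float)
        return roi.mean() / roi.std(), roi.std()

    # Hypothetical tolerance check mirroring the published levels.
    r2 = linearity_r2([-995, -510, -12, 248, 922], [-1000, -500, 0, 250, 900])
    snr, noise = snr_and_noise(np.random.default_rng(3).normal(55, 1.4, size=5000))
    print("linearity OK:", r2 > 0.990, "| SNR OK:", snr > 36, "| noise OK:", noise < 55)
    ```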

  7. The physical and biological basis of quantitative parameters derived from diffusion MRI

    PubMed Central

    2012-01-01

    Diffusion magnetic resonance imaging is a quantitative imaging technique that measures the underlying molecular diffusion of protons. Diffusion-weighted imaging (DWI) quantifies the apparent diffusion coefficient (ADC) which was first used to detect early ischemic stroke. However this does not take account of the directional dependence of diffusion seen in biological systems (anisotropy). Diffusion tensor imaging (DTI) provides a mathematical model of diffusion anisotropy and is widely used. Parameters, including fractional anisotropy (FA), mean diffusivity (MD), parallel and perpendicular diffusivity can be derived to provide sensitive, but non-specific, measures of altered tissue structure. They are typically assessed in clinical studies by voxel-based or region-of-interest based analyses. The increasing recognition of the limitations of the diffusion tensor model has led to more complex multi-compartment models such as CHARMED, AxCaliber or NODDI being developed to estimate microstructural parameters including axonal diameter, axonal density and fiber orientations. However these are not yet in routine clinical use due to lengthy acquisition times. In this review, I discuss how molecular diffusion may be measured using diffusion MRI, the biological and physical bases for the parameters derived from DWI and DTI, how these are used in clinical studies and the prospect of more complex tissue models providing helpful micro-structural information. PMID:23289085
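    The DTI parameters named above follow directly from the tensor eigenvalues; the sketch below uses the standard formulas for mean diffusivity and fractional anisotropy, with illustrative eigenvalues rather than data from the review.

    ```python
    import numpy as np

    def md_fa(l1, l2, l3):
        """Mean diffusivity and fractional anisotropy from the diffusion tensor eigenvalues."""
        lam = np.array([l1, l2, l3], dtype=float)
        md = lam.mean()
        fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
        return md, fa

    # Illustrative eigenvalues (units of 10^-3 mm^2/s).
    print(md_fa(1.7, 0.4, 0.3))   # anisotropic, white-matter-like: high FA
    print(md_fa(0.8, 0.8, 0.8))   # isotropic: FA = 0
    ```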

  8. Spatial and radiometric characterization of multi-spectrum satellite images through multi-fractal analysis

    NASA Astrophysics Data System (ADS)

    Alonso, Carmelo; Tarquis, Ana M.; Zúñiga, Ignacio; Benito, Rosa M.

    2017-03-01

Several studies have shown that vegetation indexes can be used to estimate root zone soil moisture. Earth surface images obtained by high-resolution satellites presently give a lot of information on these indexes, based on the data of several wavelengths. Because of the potential capacity for systematic observations at various scales, remote sensing technology extends the possible data archives from the present time to several decades back. Because of this advantage, enormous efforts have been made by researchers and application specialists to delineate vegetation indexes from local scale to global scale by applying remote sensing imagery. In this work, four-band images involved in these vegetation indexes, acquired by the Ikonos-2 and Landsat-7 satellites over the same geographic location, were considered to study the effect of both spatial (pixel size) and radiometric (number of bits coding the image) resolution on these wavelength bands as well as on two vegetation indexes: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). To do so, a multi-fractal analysis was applied to each of these bands and to the two derived indexes. The results showed that spatial resolution has a similar scaling effect in the four bands, but radiometric resolution has a larger influence in the blue and green bands than in the red and near-infrared bands. The NDVI showed a higher sensitivity to the radiometric resolution than the EVI, while both were equally affected by the spatial resolution. Of the two factors, the spatial resolution has the major impact on the multi-fractal spectrum for all the bands and the vegetation indexes. This information should be taken into account when vegetation indexes based on different satellite sensors are obtained.
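    The two indexes analysed above are simple band combinations; the sketch below computes NDVI and EVI with the commonly used EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1). The toy reflectance arrays are placeholders for the Ikonos-2 / Landsat-7 bands.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
        """Enhanced Vegetation Index with the commonly used coefficient set."""
        return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

    # Toy surface-reflectance bands (values in [0, 1]), standing in for real scenes.
    rng = np.random.default_rng(4)
    blue = rng.uniform(0.02, 0.10, 100)
    red = rng.uniform(0.03, 0.15, 100)
    nir = rng.uniform(0.20, 0.50, 100)
    print("mean NDVI:", ndvi(nir, red).mean(), "| mean EVI:", evi(nir, red, blue).mean())
    ```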

  9. A component-based system for agricultural drought monitoring by remote sensing.

    PubMed

    Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on a multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.

  10. A component-based system for agricultural drought monitoring by remote sensing

    PubMed Central

    Yuan, Yanbin; You, Lin; Chen, Chao

    2017-01-01

In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, drought-related software and platform development lags behind the theoretical research. The current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on a multi-dimensional feature space. In view of the above problems, this paper aims at fixing this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China’s Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring. PMID:29236700

  11. Real-Time Three-Dimensional Cell Segmentation in Large-Scale Microscopy Data of Developing Embryos.

    PubMed

    Stegmaier, Johannes; Amat, Fernando; Lemon, William C; McDole, Katie; Wan, Yinan; Teodoro, George; Mikut, Ralf; Keller, Philipp J

    2016-01-25

    We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Serial robot for the trajectory optimization and error compensation of TMT mask exchange system

    NASA Astrophysics Data System (ADS)

    Wang, Jianping; Zhang, Feifan; Zhou, Zengxiang; Zhai, Chao

    2015-10-01

The mask exchange system is a main part of the Multi-Object Broadband Imaging Echellette (MOBIE) on the Thirty Meter Telescope (TMT). Based on the concept of the TMT mask exchange system, this paper introduces a preliminary design built around the IRB 140 robot. The stiffness model of the IRB 140 in SolidWorks was analyzed under different gravity vectors for further error compensation. To determine a suitable installation location and path plan, the robot and mask cassette models were imported into the MOBIE model, different schemes were simulated, and an initial installation position and routing were obtained. Based on these initial parameters, the IRB 140 robot was operated to simulate the path and estimate the mask exchange time. Meanwhile, MATLAB and ADAMS software were used to perform simulation analyses, optimize the route, acquire the kinematics parameters, and compare them with the experimental results. The simulations and experiments described in the paper provide a theoretical reference for efficiently improving the structure of the mask exchange system, optimizing the path, and increasing the precision of the robot positioning.

  13. Material classification and automatic content enrichment of images using supervised learning and knowledge bases

    NASA Astrophysics Data System (ADS)

    Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.

    2011-02-01

    In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieving of images from these databases is a significant current research area. In particular, there is a growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g. objects made of steel are more likely to be buildings). These capabilities are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture, and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped in gathering more information from the images since it assigned multiple classes to the images. A framework for the I2T methodology is presented.
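    The two classifier designs compared above (a single multi-class SVM versus one binary SVM per material) can be sketched as follows, with Gabor texture statistics and RGB histograms as features. The filter-bank frequencies and orientations, histogram bin count, and toy images are assumptions, not the study's exact settings.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC
    from sklearn.multiclass import OneVsRestClassifier

    def material_features(rgb):
        """Gabor texture statistics + RGB colour histogram for one image of shape (H, W, 3)."""
        gray = rgb.mean(axis=2)
        feats = []
        for freq in (0.1, 0.2, 0.4):                  # hypothetical filter bank
            for theta in (0, np.pi / 4, np.pi / 2):
                real, _ = gabor(gray, frequency=freq, theta=theta)
                feats += [real.mean(), real.var()]
        for ch in range(3):                           # 8-bin histogram per RGB channel
            hist, _ = np.histogram(rgb[..., ch], bins=8, range=(0, 1), density=True)
            feats += hist.tolist()
        return np.array(feats)

    # Toy data: 20 random "images" with 3 material labels (the study used bricks,
    # cloth, grass, sand, stones and wood).
    rng = np.random.default_rng(5)
    X = np.array([material_features(rng.random((32, 32, 3))) for _ in range(20)])
    y = rng.integers(0, 3, size=20)

    single = SVC().fit(X, y)                              # one multi-class SVM
    per_material = OneVsRestClassifier(SVC()).fit(X, y)   # one binary SVM per material
    ```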

  14. MO-G-17A-02: Computer Simulation Studies for On-Board Functional and Molecular Imaging of the Prostate Using a Robotic Multi-Pinhole SPECT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, L; Duke University Medical Center, Durham, NC; Fudan University Shanghai Cancer Center, Shanghai

Purpose: To investigate prostate imaging onboard radiation therapy machines using a novel robotic, 49-pinhole Single Photon Emission Computed Tomography (SPECT) system. Methods: Computer-simulation studies were performed for region-of-interest (ROI) imaging using a 49-pinhole SPECT collimator and for broad cross-section imaging using a parallel-hole SPECT collimator. A male XCAT phantom was computer-simulated in the supine position with one 12mm-diameter tumor added in the prostate. A treatment couch was added to the phantom. Four-minute detector trajectories for imaging a 7cm-diameter-sphere ROI encompassing the tumor were investigated with different parameters, including pinhole focal length, pinhole diameter and trajectory starting angle. Pseudo-random Poisson noise was included in the simulated projection data, and SPECT images were reconstructed by OSEM with 4 subsets and up to 10 iterations. Images were evaluated by visual inspection, profiles, and Root-Mean-Square-Error (RMSE). Results: The tumor was well visualized above background by the 49-pinhole SPECT system with different pinhole parameters while it was not visible with parallel-hole SPECT imaging. Minimum RMSEs were 0.30 for 49-pinhole imaging and 0.41 for parallel-hole imaging. For parallel-hole imaging, the detector trajectory from right-to-left yielded slightly lower RMSEs than that from posterior to anterior. For 49-pinhole imaging, near-minimum RMSEs were maintained over a broader range of OSEM iterations with a 5mm pinhole diameter and 21cm focal length versus a 2mm diameter pinhole and 18cm focal length. The detector with 21cm pinhole focal length had the shortest rotation radius averaged over the trajectory. Conclusion: On-board functional and molecular prostate imaging may be feasible in 4-minute scan times by robotic SPECT. A 49-pinhole SPECT system could improve such imaging as compared to broad cross-section parallel-hole collimated SPECT imaging. Multi-pinhole imaging can be improved by considering pinhole focal length, pinhole diameter, and trajectory starting angle. The project is supported by the NIH grant 5R21-CA156390.

  15. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. These developed filters are: the refined Lee filter based on the estimation of the minimum mean square error (MMSE); the improved Sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation; filtering by stationary wavelet transform (SWT) using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, which is a combination of two complementary filters, the refined Lee filter and the SWT wavelet transform, where one filter can boost the results of the other. The originality of our work is based on the application of these methods to several types of images: amplitude, intensity and complex, from a satellite or an airborne radar, and on the optimization of wavelet filtering by adding a parameter in the calculation of the threshold. This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area located near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; University of Missouri, Columbia, MO; Chen, H

Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and CT acquisition and reconstruction parameters, but a thoughtful analysis of these effects is absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer-simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; the calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
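    A minimal sketch of a local NPS calculation using the image-subtraction background removal favoured above is given below. Normalization conventions for the NPS vary between groups; this version uses one common definition (|FFT|^2 scaled by pixel area over the number of pixels) and synthetic noise ROIs, not the Catphan data.

    ```python
    import numpy as np

    def local_nps(roi_a, roi_b, pixel_size_mm):
        """2D noise power spectrum of a uniform ROI using the image-subtraction method.

        roi_a, roi_b: two ROIs at the same location from repeated scans; subtracting them
        removes the deterministic background, leaving (sqrt(2)-scaled) noise only.
        """
        diff = (np.asarray(roi_a, float) - np.asarray(roi_b, float)) / np.sqrt(2.0)
        ny, nx = diff.shape
        dft = np.fft.fftshift(np.fft.fft2(diff - diff.mean()))
        return (np.abs(dft) ** 2) * (pixel_size_mm ** 2) / (nx * ny)   # HU^2 * mm^2

    rng = np.random.default_rng(6)
    a = rng.normal(0, 12, size=(64, 64))
    b = rng.normal(0, 12, size=(64, 64))
    nps = local_nps(a, b, pixel_size_mm=0.5)
    # Integrating the NPS over frequency should roughly recover the noise variance (12^2).
    print("variance from NPS:", nps.sum() / (64 * 0.5 * 64 * 0.5), "vs", 12 ** 2)
    ```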

  17. GPR image analysis to locate water leaks from buried pipes by applying variance filters

    NASA Astrophysics Data System (ADS)

    Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín

    2018-05-01

Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features from GPR images, such as leaks and components, under controlled laboratory conditions using a methodology based on second-order statistical parameters and, using the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with the results obtained by using a well-established multi-agent based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, which can be identified by non-highly qualified personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
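    The core second-order statistic mentioned above, a sliding-window variance filter, can be written compactly as below. The window size, toy B-scan, and "leak signature" patch are illustrative assumptions; the paper's full GPR processing and 3D model building are not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def variance_filter(img, size=5):
        """Local variance filter: Var = E[x^2] - (E[x])^2 over a sliding window."""
        img = img.astype(float)
        mean = uniform_filter(img, size)
        mean_sq = uniform_filter(img ** 2, size)
        return mean_sq - mean ** 2

    # Toy B-scan-like image: background clutter plus a bright anomaly.
    rng = np.random.default_rng(7)
    bscan = rng.normal(0, 1, size=(200, 300))
    bscan[80:90, 140:160] += 6                       # stand-in for a leak signature
    var_img = variance_filter(bscan, size=7)
    print("anomaly vs background variance:",
          var_img[80:90, 140:160].mean(), var_img[:50, :50].mean())
    ```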

  18. Automatic Cell Segmentation in Fluorescence Images of Confluent Cell Monolayers Using Multi-object Geometric Deformable Model.

    PubMed

    Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L

    2013-03-13

    With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and principal curvature based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps and gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.

  19. Application of a Simplified Method for Estimating Perfusion Derived from Diffusion-Weighted MR Imaging in Glioma Grading.

    PubMed

    Cao, Mengqiu; Suo, Shiteng; Han, Xu; Jin, Ke; Sun, Yawen; Wang, Yao; Ding, Weina; Qu, Jianxun; Zhang, Xiaohua; Zhou, Yan

    2017-01-01

Purpose: To evaluate the feasibility of a simplified method based on diffusion-weighted imaging (DWI) acquired with three b-values to measure tissue perfusion linked to microcirculation, to validate it against perfusion-related parameters derived from intravoxel incoherent motion (IVIM) and dynamic contrast-enhanced (DCE) magnetic resonance (MR) imaging, and to investigate its utility to differentiate low- from high-grade gliomas. Materials and Methods: The prospective study was approved by the local institutional review board and written informed consent was obtained from all patients. Between May 2016 and May 2017, 50 patients with confirmed glioma were assessed with multi-b-value DWI and DCE MR imaging at 3.0 T. Besides the conventional apparent diffusion coefficient (ADC0,1000) map, perfusion-related parametric maps for the IVIM-derived perfusion fraction (f) and pseudodiffusion coefficient (D*), DCE MR imaging-derived pharmacokinetic metrics, including Ktrans, ve and vp, as well as a metric named the simplified perfusion fraction (SPF), were generated. Correlation between perfusion-related parameters was analyzed by using the Spearman rank correlation. All imaging parameters were compared between the low-grade (n = 19) and high-grade (n = 31) groups by using the Mann-Whitney U test. The diagnostic performance for tumor grading was evaluated with receiver operating characteristic (ROC) analysis. Results: SPF showed strong correlation with IVIM-derived f and D* (ρ = 0.732 and 0.716, respectively; both P < 0.001). Compared with f, SPF was more strongly correlated with DCE MR imaging-derived Ktrans (ρ = 0.607; P < 0.001) and vp (ρ = 0.397; P = 0.004). Among all parameters, SPF achieved the highest accuracy for differentiating low- from high-grade gliomas, with an area under the ROC curve of 0.942, which was significantly higher than that of ADC0,1000 (P = 0.004). By using SPF as a discriminative index, the diagnostic sensitivity and specificity were 87.1% and 94.7%, respectively, at the optimal cut-off value of 19.26%. Conclusion: The simplified method to measure tissue perfusion based on DWI using three b-values may be helpful to differentiate low- from high-grade gliomas. SPF may serve as a valuable alternative for measuring tumor perfusion in gliomas in a noninvasive, convenient and efficient way.
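    A generic three-b-value perfusion-fraction estimate, in the spirit of the simplified method above, is sketched below: fit the two non-zero b-values mono-exponentially, extrapolate the tissue line back to b = 0, and take the relative signal drop as the perfusion fraction. The exact b-values and definition used in the paper may differ; the bi-exponential test signal and parameters are illustrative only.

    ```python
    import numpy as np

    def simplified_perfusion_fraction(s0, s_low, s_high, b_low, b_high):
        """Generic simplified perfusion fraction from three b-values.

        The two non-zero b-values define a mono-exponential tissue line; extrapolating it
        back to b = 0 and comparing with the measured S(0) isolates the perfusion-related
        signal drop: SPF = (S0 - S_intercept) / S0.
        """
        adc_tissue = np.log(s_low / s_high) / (b_high - b_low)   # slope of the high-b line
        s_intercept = s_low * np.exp(b_low * adc_tissue)         # extrapolate to b = 0
        return (s0 - s_intercept) / s0

    # Synthetic bi-exponential IVIM signal with f = 0.15, for illustration only.
    b = np.array([0, 200, 1000])
    f, D, Dstar = 0.15, 0.8e-3, 10e-3
    S = f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D)
    print("SPF estimate:", simplified_perfusion_fraction(S[0], S[1], S[2], 200, 1000))
    ```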

  20. Navigating the fifth dimension: new concepts in interactive multimodality and multidimensional image navigation

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes

    2005-04-01

Display and interpretation of multi-dimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools allowing the user to navigate and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or the transit of a contrast agent, adding a fifth dimension to the data. We developed a DICOM-compliant software for real-time navigation in very large sets of 5-dimensional data based on an intuitive multidimensional jog-wheel widely used by the video-editing industry. The software, provided under open source licensing, allows interactive, single-handed navigation through 3D images while adjusting the blending of image modalities, image contrast and intensity and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and means for interactively navigating these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, magnification etc. Conventional mouse-driven user interfaces requiring the user to manipulate cursors and sliders on the screen are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jog-wheel device used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.

  1. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

Shape memory alloy has a unique capability to return to its original shape after physical deformation by applying heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also attained using Taguchi’s signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the ideal response quality in WEDM of Ni-Ti shape memory alloy.
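    The core of a desirability function analysis, individual desirabilities combined into an equal-weight geometric mean, is sketched below. The response values for the three hypothetical parameter settings are illustrative placeholders, not the paper's experimental data.

    ```python
    import numpy as np

    def d_larger_is_better(y, y_min, y_max):
        """Larger-the-better desirability (e.g. cutting speed)."""
        return np.clip((y - y_min) / (y_max - y_min), 0, 1)

    def d_smaller_is_better(y, y_min, y_max):
        """Smaller-the-better desirability (e.g. kerf width, surface roughness)."""
        return np.clip((y_max - y) / (y_max - y_min), 0, 1)

    # Hypothetical responses for three WEDM settings:
    # (cutting speed mm/min, kerf width mm, surface roughness um), illustrative numbers only.
    runs = np.array([[2.1, 0.32, 2.8],
                     [2.6, 0.35, 3.1],
                     [1.8, 0.29, 2.4]])
    d_speed = d_larger_is_better(runs[:, 0], runs[:, 0].min(), runs[:, 0].max())
    d_kerf = d_smaller_is_better(runs[:, 1], runs[:, 1].min(), runs[:, 1].max())
    d_ra = d_smaller_is_better(runs[:, 2], runs[:, 2].min(), runs[:, 2].max())

    composite = (d_speed * d_kerf * d_ra) ** (1 / 3)   # equal-weight geometric mean
    print("best run:", int(np.argmax(composite)), composite.round(3))
    ```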

  2. Sentinel 2B: the image quality performances at the beginning of the mission

    NASA Astrophysics Data System (ADS)

    Trémas, T.; Lonjou, V.; Dick, A.; Languille, F.; Gaudel-Vacaresse, A.; Vidal, B.; Revel, C.

    2017-09-01

Launched on March 6th, 2017 from Kourou, Sentinel 2B has passed its commissioning phase. Sentinel 2B will work together with Sentinel 2A, launched in June 2015. The building and implementation of the satellite were carried out under the responsibility of ESA, for the European Commission. The Image Quality subset of commissioning was delegated by ESA to CNES, drawing on the experience of the French Space Agency with previous imagers. This phase lasted 4 months after the launch, a little longer than the formal In-Orbit Calibration period conducted by ESA, as some Image Quality parameters require several months to converge to a stable state. This paper presents the status of the satellite, from an IQ perspective, just before it entered its operational phase. The radiometric and geometric performances are listed, including: the absolute radiometric calibration, the equalization, the SNR, and the absolute and multi-temporal location accuracy. The performance of the two satellites, Sentinel 2A and Sentinel 2B, working together is also addressed, with a particular focus on multi-temporal location performance and the homogeneity of radiometric inter-calibrations. The accomplishment of the Global Reference Image over Europe is mentioned as well. The IQ commissioning phase ended in June 2017. Since then, the monitoring of IQ parameters has been under the responsibility of ESA/ESRIN. Nevertheless, CNES continues to support ESA in surveying the accuracy of S2A and S2B performances. The article ends with the prospects offered by the Sentinel 2A + Sentinel 2B pair.

  3. A novel mesh processing based technique for 3D plant analysis

    PubMed Central

    2012-01-01

Background In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis offers tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Result In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and plant organ tracking over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75% and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969

  4. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Pattern Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the limb movements necessary to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives in finding different walking gait solutions for the quadruped robot.

  5. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy

    PubMed Central

    Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.

    2015-01-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614

  6. Bleed-through correction for rendering and correlation analysis in multi-colour localization microscopy.

    PubMed

    Kim, Dahan; Curthoys, Nikki M; Parent, Matthew T; Hess, Samuel T

    2013-09-01

    Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined.
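    The basic idea behind a bleed-through correction, subtracting the measured misidentification fraction of one species from the other channel, is sketched below on rendered density images. This is only an illustration of the principle, assuming a known bleed-through rate from single-species controls; the authors' actual correction for localization data is more sophisticated than a plain subtraction.

    ```python
    import numpy as np

    def correct_bleed_through(img_a, img_b, rate_ab):
        """Subtract the expected bleed-through of species A from channel B.

        img_a, img_b: rendered localization-density images of the two species;
        rate_ab: fraction of A localizations misidentified as B (measured from
        single-species control samples). Negative values are clipped to zero.
        """
        return np.clip(img_b - rate_ab * img_a, 0, None)

    # Toy example with 2% bleed-through, the lowest rate examined in the paper.
    rng = np.random.default_rng(8)
    true_a = rng.poisson(5.0, size=(256, 256)).astype(float)
    true_b = rng.poisson(1.0, size=(256, 256)).astype(float)
    observed_b = true_b + 0.02 * true_a              # apparent colocalization from bleed-through
    corrected_b = correct_bleed_through(true_a, observed_b, rate_ab=0.02)
    print("residual error after correction:", np.abs(corrected_b - true_b).mean())
    ```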

  7. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  8. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
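    The first of the two global sensitivity measures named above, the Elementary Effect, can be sketched as below on a cheap surrogate. This is a simplified Morris-style screening loop, not the full trajectory design used in practice, and the surrogate function, step size and trajectory count are assumptions.

    ```python
    import numpy as np

    def elementary_effects(model, n_params, n_trajectories=50, delta=0.1, seed=0):
        """Crude Morris-style screening: mean absolute elementary effect per parameter.

        'model' is a cheap metamodel evaluated on the unit hypercube.
        """
        rng = np.random.default_rng(seed)
        mu_star = np.zeros(n_params)
        for _ in range(n_trajectories):
            x = rng.uniform(0, 1 - delta, size=n_params)
            y0 = model(x)
            for i in range(n_params):
                x_step = x.copy()
                x_step[i] += delta
                mu_star[i] += abs(model(x_step) - y0) / delta
        return mu_star / n_trajectories

    # Hypothetical surrogate of a laser-drilling response: parameter 0 dominates,
    # parameters 1 and 2 interact, parameter 3 is inert.
    surrogate = lambda x: 4 * x[0] + x[1] * x[2] + 0.0 * x[3]
    print(elementary_effects(surrogate, n_params=4).round(2))
    ```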

  9. SU-E-I-100: Heterogeneity Studying for Primary and Lymphoma Tumors by Using Multi-Scale Image Texture Analysis with PET-CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Qinfen; Li, H

Purpose: The purpose of this research is to study the tumor heterogeneity of primary and lymphoma tumors by using multi-scale texture analysis with PET-CT images, where the tumor heterogeneity is expressed by texture features. Methods: Datasets were collected from 12 lung cancer patients, and both primary and lymphoma tumors were detected in all of these patients. All patients underwent a whole-body 18F-FDG PET/CT scan before treatment. The regions of interest (ROI) of the primary and lymphoma tumors were contoured by experienced clinical doctors and then extracted automatically using Matlab software. According to the geometry size of the contour structure, the tumor images were decomposed by a multi-scale method. A wavelet transform with L layers of sampling was performed on the ROI structures within the images, yielding wavelet sub-bands of the same size as the original image; the number of sub-bands is 3L+1. The gray level co-occurrence matrix (GLCM) was calculated within the different sub-bands, and then energy, inertia, correlation and gray in-homogeneity were extracted from the GLCM. Finally, heterogeneity statistical analysis was performed for the primary and lymphoma tumors using the texture features. Results: Energy, inertia, correlation and gray in-homogeneity were calculated in our experiments for the heterogeneity statistical analysis. Energy for the primary and lymphoma tumors was equal for the same patient, while the gray in-homogeneity and inertia of the primary tumors were 2.59595±0.00855 and 0.6439±0.0007, respectively, and those of the lymphoma tumors were 2.60115±0.00635 and 0.64435±0.00055, respectively. The experiments showed that the volume of the lymphoma was smaller than that of the primary tumor, but its gray in-homogeneity and inertia were higher than those of the primary tumor in the same patient; the correlation for lymphoma tumors was zero, while the correlation for primary tumors was slightly stronger. Conclusion: This study showed that there were effective heterogeneity differences between primary and lymphoma tumors by multi-scale image texture analysis. This work is supported by the National Natural Science Foundation of China (No. 61201441), the Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province (No. BS2012DX038), the Project of Shandong Province Higher Educational Science and Technology Program (No. J12LN23), and the Jinan youth science and technology star program (No. 20120109).
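    The GLCM features listed above can be computed as in the sketch below using scikit-image. For brevity the wavelet decomposition step is omitted and a single ROI is analysed directly; the gray-level quantization, the "in-homogeneity" proxy (1 minus GLCM homogeneity), and the random ROI are assumptions, not the paper's exact definitions. Note that `graycomatrix`/`graycoprops` require skimage >= 0.19 (older releases spell them `greycomatrix`/`greycoprops`).

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_texture(roi_8bit, levels=32):
        """Energy, inertia (contrast), correlation and an in-homogeneity proxy from a GLCM."""
        q = (roi_8bit // (256 // levels)).astype(np.uint8)        # quantize gray levels
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {
            "energy": graycoprops(glcm, "energy").mean(),
            "inertia": graycoprops(glcm, "contrast").mean(),
            "correlation": graycoprops(glcm, "correlation").mean(),
            "inhomogeneity": 1.0 - graycoprops(glcm, "homogeneity").mean(),  # proxy definition
        }

    roi = np.random.default_rng(9).integers(0, 256, size=(48, 48)).astype(np.uint8)
    print(glcm_texture(roi))
    ```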

  10. Brain perfusion imaging using a Reconstruction-of-Difference (RoD) approach for cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Mow, M.; Zbijewski, W.; Sisniega, A.; Xu, J.; Dang, H.; Stayman, J. W.; Wang, X.; Foos, D. H.; Koliatsos, V.; Aygun, N.; Siewerdsen, J. H.

    2017-03-01

Purpose: To improve the timely detection and treatment of intracranial hemorrhage or ischemic stroke, recent efforts include the development of cone-beam CT (CBCT) systems for perfusion imaging and new approaches to estimate perfusion parameters despite slow rotation speeds compared to multi-detector CT (MDCT) systems. This work describes the development of a brain perfusion CBCT method using a reconstruction of difference (RoD) approach to enable perfusion imaging on a newly developed CBCT head scanner prototype. Methods: A new reconstruction approach using RoD with a penalized-likelihood framework was developed to image the temporal dynamics of vascular enhancement. A digital perfusion simulation was developed to give a realistic representation of brain anatomy, artifacts, noise, scanner characteristics, and hemodynamic properties. This simulation includes a digital brain phantom, time-attenuation curves and noise parameters, a novel forward projection method for improved computational efficiency, and perfusion parameter calculation. Results: Our results show the feasibility of estimating perfusion parameters from a set of images reconstructed from slow scans, sparse data sets, and arc-length scans as short as 60 degrees. The RoD framework significantly reduces noise and time-varying artifacts from inconsistent projections. Proper regularization and the use of overlapping reconstructed arcs can potentially further decrease bias and increase temporal resolution, respectively. Conclusions: A digital brain perfusion simulation with the RoD imaging approach has been developed and supports the feasibility of using a CBCT head scanner for perfusion imaging. Future work will include testing with data acquired using a 3D-printed perfusion phantom, and translation to preclinical and clinical studies.

  11. SU-E-T-113: Dose Distribution Using Respiratory Signals and Machine Parameters During Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imae, T; Haga, A; Saotome, N

Purpose: Volumetric modulated arc therapy (VMAT) is a rotational intensity-modulated radiotherapy (IMRT) technique capable of acquiring projection images during treatment. Treatment plans for lung tumors using stereotactic body radiotherapy (SBRT) are calculated with planning computed tomography (CT) images of the exhale phase only. The purpose of this study is to evaluate the dose distribution by reconstructing it from only the data acquired during treatment, such as respiratory signals and machine parameters. Methods: A phantom and three patients with lung tumors underwent CT scans for treatment planning. They were treated by VMAT while projection images were acquired to derive their respiratory signals and machine parameters, including the positions of the multi-leaf collimators, dose rates and integrated monitor units. The respiratory signals were divided into 4 and 10 phases and the machine parameters were correlated with the divided respiratory signals based on the gantry angle. Dose distributions for each respiratory phase were calculated from plans reconstructed from the respiratory signals and the machine parameters acquired during treatment. The doses at the isocenter, the maximum dose point and the centroid of the target were evaluated. Results and Discussion: Dose distributions during treatment were calculated using the machine parameters and the respiratory signals detected from the projection images. The maximum dose difference between the planned and in-treatment distributions was −1.8±0.4% at the centroid of the target, and the dose differences at the evaluated points between the 4- and 10-phase reconstructions were not significant. Conclusion: The present method successfully evaluated the dose distribution using respiratory signals and machine parameters acquired during treatment. This method is feasible for verifying the actual dose delivered to a moving target.

  12. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which is a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the first sampled from projected 3D gradients and the second from the 2D image gradients, so as to recover the 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, the 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration error from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application in clinical image-guidance systems.
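    The in-plane translation step described above relies on phase correlation; a plain (non-gradient) intensity-based version of that idea is sketched below. The toy images and integer shifts are illustrative, and the paper's gradient phase correlation and template matching are not reproduced.

    ```python
    import numpy as np

    def phase_correlation_shift(img_a, img_b):
        """Estimate the integer in-plane translation between two images via phase correlation.

        The inverse FFT of the normalized cross-power spectrum peaks at the relative shift.
        """
        fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
        cross = fa * np.conj(fb)
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrapped indices to signed shifts.
        if dy > img_a.shape[0] // 2:
            dy -= img_a.shape[0]
        if dx > img_a.shape[1] // 2:
            dx -= img_a.shape[1]
        return dy, dx

    rng = np.random.default_rng(10)
    base = rng.normal(size=(128, 128))
    shifted = np.roll(np.roll(base, 7, axis=0), -12, axis=1)
    print(phase_correlation_shift(shifted, base))   # expected (7, -12)
    ```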

  13. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods for multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure, and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stable and unstable ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis shows that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s].
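
    As a minimal sketch of the single-parameter sensitivity idea (all coefficients and operating points below are invented for illustration; the paper's actual response-surface model is not reproduced), the sensitivity of a fitted polynomial with respect to one parameter can be estimated as a partial derivative at a nominal operating point:

```python
# Illustrative only: hypothetical second-order response surface
# y = b0 + sum(bi*xi) + sum(bii*xi^2) for interlaminar shear strength.
import numpy as np

b0 = 30.0
b_lin = np.array([0.12, 0.05, 0.01, -8.0])    # temperature, tension, pressure, velocity (made up)
b_quad = np.array([-4e-4, -6e-5, -4e-6, 9.0])

def strength(x):
    return b0 + b_lin @ x + b_quad @ (x ** 2)

def sensitivity(x, i, h=1e-3):
    """Central-difference estimate of d(strength)/d(x_i) at operating point x."""
    xp, xm = x.copy(), x.copy()
    xp[i] += h
    xm[i] -= h
    return (strength(xp) - strength(xm)) / (2 * h)

x0 = np.array([125.0, 330.0, 1150.0, 0.3])    # nominal temperature, tension, pressure, velocity
print({i: sensitivity(x0, i) for i in range(4)})
```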

  14. AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING

    PubMed Central

    Sharif, Behzad; Bresler, Yoram

    2013-01-01

    We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159

  15. Multi-institutional Feasibility Study of a Fast Patient Localization Method in Total Marrow Irradiation With Helical Tomotherapy: A Global Health Initiative by the International Consortium of Total Marrow Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, Yutaka; Vagge, Stefano; Agostinelli, Stefano

    2015-01-01

    Purpose: To develop, characterize, and implement a fast patient localization method for total marrow irradiation. Methods and Materials: Topographic images were acquired using megavoltage computed tomography (MVCT) detector data by delivering static orthogonal beams while the couch traversed through the gantry. Geometric and detector response corrections were performed to generate a megavoltage topogram (MVtopo). We also generated kilovoltage topograms (kVtopo) from the projection data of 3-dimensional CT images to reproduce the same geometry as helical tomotherapy. The MVtopo imaging dose and the optimal image acquisition parameters were investigated. A multi-institutional phantom study was performed to verify the image registration uncertainty. Forty-five MVtopo images were acquired and analyzed with in-house image registration software. Results: The smallest jaw size (front and backup jaws of 0) provided the best image contrast and longitudinal resolution. Couch velocity did not affect the image quality or geometric accuracy. The MVtopo dose was less than the MVCT dose. The image registration uncertainty from the multi-institutional study was within 2.8 mm. In patient localization, the differences in calculated couch shift between the registration with MVtopo-kVtopo and MVCT-kVCT images in the lateral, cranial–caudal, and vertical directions were 2.2 ± 1.7 mm, 2.6 ± 1.4 mm, and 2.7 ± 1.1 mm, respectively. The imaging time for MVtopo acquisition at a couch speed of 3 cm/s was <1 minute, compared with ≥15 minutes for MVCT for all patients. Conclusion: Whole-body MVtopo imaging could be an effective alternative to time-consuming MVCT for total marrow irradiation patient localization.

  16. Multi-phase classification by a least-squares support vector machine approach in tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, Faisal; Enzmann, Frieder; Kersten, Michael

    2016-03-01

    Image processing of X-ray-computed polychromatic cone-beam micro-tomography (μXCT) data of geological samples mainly involves artefact reduction and phase segmentation. For the former, the main beam-hardening (BH) artefact is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. A Matlab code for this approach is provided in the Appendix. The final BH-corrected image is extracted from the residual data or from the difference between the surface elevation values and the original grey-scale values. For the segmentation, we propose a novel least-squares support vector machine (LS-SVM) approach, an algorithm for pixel-based multi-phase classification. A receiver operating characteristic (ROC) analysis was performed on BH-corrected and uncorrected samples to show that BH correction is in fact an important prerequisite for accurate multi-phase classification. The combination of the two approaches was then used to successfully classify three multi-phase rock core samples of varying complexity.
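
    A rough Python analogue of the quadratic-surface beam-hardening correction described above (the paper itself supplies Matlab code; this version is only a sketch of the same idea) fits z = a + bx + cy + dx² + ey² + fxy to a slice by least squares and subtracts the fitted surface:

```python
# Sketch: least-squares quadratic surface fit to a reconstructed slice.
import numpy as np

def bh_correct(slice_img):
    ny, nx = slice_img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = xx.ravel().astype(float)
    y = yy.ravel().astype(float)
    z = slice_img.ravel().astype(float)
    # design matrix for z = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    surface = (A @ coeffs).reshape(ny, nx)
    # residual image = BH-corrected slice (mean grey level restored for display)
    return slice_img - surface + surface.mean()
```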

  17. Extraction and Analysis of Major Autumn Crops in Jingxian County Based on Multi-Temporal GF-1 Remote Sensing Image and Object-Oriented

    NASA Astrophysics Data System (ADS)

    Ren, B.; Wen, Q.; Zhou, H.; Guan, F.; Li, L.; Yu, H.; Wang, Z.

    2018-04-01

    The purpose of this paper is to provide decision support for the adjustment and optimization of the crop planting structure in Jingxian County. An object-oriented information extraction method is used to extract corn and cotton in Jingxian County, Hengshui City, Hebei Province, based on multi-period GF-1 16-meter images. The optimal acquisition period was determined by analyzing the spectral characteristics of corn and cotton at different growth stages, using the multi-period GF-1 16-meter images, phenological data, and field survey data. The results show that the overall classification accuracy for corn and cotton reached 95.7 %, the producer accuracies were 96 % and 94 % respectively, and the user accuracies were 95.05 % and 95.9 % respectively, which satisfies the requirements of crop monitoring applications. Therefore, combining multi-period high-resolution images with object-oriented classification can effectively extract the large-scale distribution of crops, providing a convenient and effective technical means for crop monitoring.
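
    For reference, producer and user accuracy follow directly from a confusion matrix; the sketch below uses purely illustrative counts, not the paper's data (rows are reference classes, columns are mapped classes):

```python
# Illustrative confusion-matrix metrics for a two-class (corn/cotton) map.
import numpy as np

conf = np.array([[480,  20],   # reference corn mapped as corn / cotton (counts are invented)
                 [ 25, 475]])  # reference cotton mapped as corn / cotton

overall = np.trace(conf) / conf.sum()
producer = np.diag(conf) / conf.sum(axis=1)  # per reference class (complement of omission error)
user = np.diag(conf) / conf.sum(axis=0)      # per mapped class (complement of commission error)
print(overall, producer, user)
```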

  18. Analysis on the multi-dimensional spectrum of the thrust force for the linear motor feed drive system in machine tools

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Lu, Dun; Ma, Chengfang; Zhang, Jun; Zhao, Wanhua

    2017-01-01

    The motor thrust force contains numerous harmonic components due to nonlinearities of the drive circuit and of the motor itself in the linear motor feed drive system. Moreover, during motion these thrust force harmonics may vary with position, velocity, acceleration and load, which affects the displacement fluctuation of the feed drive system. Therefore, in this paper, on the basis of the thrust force spectrum obtained from the Maxwell equations and the electromagnetic energy method, the multi-dimensional variation of each thrust harmonic is analyzed under different motion parameters. A model of the servo system oriented to dynamic precision is then established, and the influence of the variation of the thrust force spectrum on the displacement fluctuation is discussed. Finally, experiments are carried out to verify the theoretical analysis. It is found that the thrust harmonics show multi-dimensional spectral characteristics under different motion parameters and loads, which should be considered when choosing motion parameters and optimizing servo control parameters in high-speed, high-precision machine tools equipped with linear motor feed drive systems.
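
    Purely as an illustration of how thrust-force harmonics scale with feed velocity (the pole pitch, amplitudes, and harmonic orders below are invented; this is not the paper's model), a synthetic force trace can be analyzed with an FFT at several velocities:

```python
# Spatial harmonics of pole pitch tau appear at temporal frequencies k*v/tau.
import numpy as np

tau = 0.032           # hypothetical pole pitch in metres
fs = 10_000.0         # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)

def thrust(v):
    """Synthetic thrust force: mean value plus 1st, 2nd and 6th spatial harmonics."""
    x = v * t
    return (100.0
            + 5.0 * np.sin(2 * np.pi * x / tau)
            + 2.0 * np.sin(2 * np.pi * 2 * x / tau)
            + 1.0 * np.sin(2 * np.pi * 6 * x / tau))

for v in (0.2, 0.5, 1.0):                      # feed velocities, m/s
    F = thrust(v)
    spectrum = np.abs(np.fft.rfft(F - F.mean()))
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"v = {v:.1f} m/s -> dominant harmonic near {peak:.1f} Hz (v/tau = {v/tau:.1f} Hz)")
```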

  19. Performance of U-net based pyramidal lucas-kanade registration on free-breathing multi-b-value diffusion MRI of the kidney.

    PubMed

    Lv, Jun; Huang, Wenjian; Zhang, Jue; Wang, Xiaoying

    2018-06-01

    In free-breathing multi-b-value diffusion-weighted imaging (DWI), a series of images typically requires several minutes to collect. During respiration the kidney is routinely displaced and may also undergo deformation. These respiratory motion effects generate artifacts and are the main sources of error in the quantification of intravoxel incoherent motion (IVIM)-derived parameters. This work proposes a fully automated framework that combines kidney segmentation with registration to improve registration accuracy. Ten healthy subjects were recruited for this experiment. For the segmentation, a U-net was adopted to delineate the kidney contour. The segmented kidney then served as a region of interest (ROI) for the registration method, pyramidal Lucas-Kanade. The proposed framework confines the kidney's solution range, thus increasing the accuracy of pyramidal Lucas-Kanade. To demonstrate the feasibility of the framework, eight regions of interest were selected in the cortex and medulla, and data stability was estimated by comparing the normalized root-mean-square error (NRMSE) values of the data fitted with the bi-exponential intravoxel incoherent motion model pre- and post-registration. The results show that the NRMSE was significantly lower after registration both in the cortex (p < 0.05) and medulla (p < 0.01) during free-breathing measurements. In addition, expert visual scoring of the derived apparent diffusion coefficient (ADC), f, D and D* maps indicated significant improvements in the alignment of the kidney in the post-registered images. The proposed framework can effectively reduce the motion artifacts of misaligned multi-b-value DWIs and the inaccuracies of the ADC, f, D and D* estimates. Advances in knowledge: This study demonstrates the feasibility of a fully automated framework combining U-net based segmentation and pyramidal Lucas-Kanade registration for improving the alignment of multi-b-value diffusion-weighted MRIs and reducing the inaccuracy of parameter estimation during free-breathing acquisition.
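
    A minimal sketch of the bi-exponential IVIM fit and NRMSE comparison mentioned above (not the authors' pipeline; the b-values, parameter bounds, and the range-based NRMSE normalization are assumptions):

```python
# Fit S(b) = S0 * (f*exp(-b*D_star) + (1-f)*exp(-b*D)) to an ROI-mean signal
# and report an NRMSE that could be compared pre- vs post-registration.
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, S0, f, D_star, D):
    return S0 * (f * np.exp(-b * D_star) + (1 - f) * np.exp(-b * D))

b_values = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2, illustrative
true = ivim(b_values, 1.0, 0.2, 0.02, 0.0018)
signal = true + np.random.default_rng(0).normal(0, 0.01, b_values.size)  # noisy ROI mean

p0 = (1.0, 0.1, 0.01, 0.001)
bounds = ([0, 0, 0.003, 0], [2, 1, 0.5, 0.003])   # keep D* and D in separate ranges
popt, _ = curve_fit(ivim, b_values, signal, p0=p0, bounds=bounds)

fit = ivim(b_values, *popt)
nrmse = np.sqrt(np.mean((signal - fit) ** 2)) / (signal.max() - signal.min())
print(popt, nrmse)
```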

  20. Use of EO-1 Advanced Land Imager (ALI) multispectral image data and real-time field sampling for water quality mapping in the Hirfanlı Dam Lake, Turkey.

    PubMed

    Kavurmacı, Murat; Ekercin, Semih; Altaş, Levent; Kurmaç, Yakup

    2013-08-01

    This paper focuses on the evaluation of water quality variations in the Hirfanlı Water Reservoir, one of the most important water resources in Turkey, using EO-1 (Earth Observing-1) Advanced Land Imager (ALI) multispectral data and real-time field sampling. Sampling was carried out at 20 points concurrently with the EO-1 ALI sensor overpass of the study area. A multi-linear regression technique was used to explore the relationships between the radiometrically corrected EO-1 ALI image data and the water quality parameters chlorophyll a, turbidity, and suspended solids. The results show that the measured and estimated values of the water quality parameters are in good agreement (R² > 0.93). The thematic maps derived from the EO-1 multispectral data for chlorophyll a, turbidity, and suspended solids show the spatial distribution of these water quality parameters. The results indicate that the reservoir has average nutrient values. Furthermore, chlorophyll a, turbidity, and suspended solids values increased in the upstream reservoir and along the shallow coast of the Hirfanlı Water Reservoir.
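
    As a hedged sketch of the multi-linear regression step (all reflectances and coefficients below are synthetic; the paper's band selection and preprocessing are not reproduced), band data can be regressed against a measured water-quality parameter and summarized with R²:

```python
# Ordinary least squares relating band reflectances to chlorophyll a at 20 stations.
import numpy as np

rng = np.random.default_rng(1)
n_stations = 20
bands = rng.uniform(0.02, 0.15, size=(n_stations, 4))        # e.g. blue/green/red/NIR reflectance
chl_a = 5.0 + 40.0 * bands[:, 1] - 25.0 * bands[:, 0] + rng.normal(0, 0.3, n_stations)

X = np.column_stack([np.ones(n_stations), bands])             # intercept + band terms
coeffs, *_ = np.linalg.lstsq(X, chl_a, rcond=None)

pred = X @ coeffs
ss_res = np.sum((chl_a - pred) ** 2)
ss_tot = np.sum((chl_a - chl_a.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(coeffs, r_squared)
```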
