Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M
2014-12-01
To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) who underwent liver CT were prospectively included. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.
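A minimal sketch of the summary arithmetic behind these figures; the helper names and the reference-region argument for CNR are illustrative, not part of the study.

```python
def snr(mean_hu, noise_hu):
    """Signal-to-noise ratio: mean attenuation divided by image noise (SD)."""
    return mean_hu / noise_hu

def cnr(liver_hu, reference_hu, noise_hu):
    """Contrast-to-noise ratio of liver against a reference region (illustrative)."""
    return (liver_hu - reference_hu) / noise_hu

# Dose reduction implied by the reported CTDIvol values (15.2 mGy vs 6.2 mGy):
ctdi_asir, ctdi_mbir = 15.2, 6.2
print(f"CTDIvol reduction: {(ctdi_asir - ctdi_mbir) / ctdi_asir:.0%}")  # ~59%, as stated
```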
Referenceless perceptual fog density prediction model
NASA Astrophysics Data System (ADS)
Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.
2014-02-01
We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without side geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.
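As a rough illustration of a space-domain NSS feature of the kind such a predictor relies on, the sketch below computes mean-subtracted, contrast-normalized (MSCN) coefficients and a simple per-patch score; the function names and the scoring rule are assumptions, not the authors' exact fog-aware features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(gray, sigma=7/6, c=1.0):
    """Mean-subtracted, contrast-normalized coefficients of a grayscale image."""
    gray = gray.astype(float)
    mu = gaussian_filter(gray, sigma)
    var = np.clip(gaussian_filter(gray**2, sigma) - mu**2, 0, None)
    return (gray - mu) / (np.sqrt(var) + c)

def patch_fog_score(gray_patch):
    """Crude per-patch indicator: denser fog tends to shrink MSCN variance."""
    return 1.0 / (1e-6 + mscn(gray_patch).var())
```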
Ultrasound image filtering using the multiplicative model
NASA Astrophysics Data System (ADS)
Navarrete, Hugo; Frery, Alejandro C.; Sanchez, Fermin; Anto, Joan
2002-04-01
Ultrasound images, as a special case of coherent images, are normally corrupted with multiplicative noise i.e. speckle noise. Speckle noise reduction is a difficult task due to its multiplicative nature, but good statistical models of speckle formation are useful to design adaptive speckle reduction filters. In this article a new statistical model, emerging from the Multiplicative Model framework, is presented and compared to previous models (Rayleigh, Rice and K laws). It is shown that the proposed model gives the best performance when modeling the statistics of ultrasound images. Finally, the parameters of the model can be used to quantify the extent of speckle formation; this quantification is applied to adaptive speckle reduction filter design. The effectiveness of the filter is demonstrated on typical in-vivo log-compressed B-scan images obtained by a clinical ultrasound system.
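A minimal sketch of the multiplicative model of speckle formation, with illustrative distribution choices; the article itself compares Rayleigh, Rice and K laws against its proposed law.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
backscatter = rng.gamma(shape=4.0, scale=1.0, size=n)          # slowly varying reflectivity
speckle = np.abs(rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)  # Rayleigh speckle
observed = backscatter * speckle                                # multiplicative corruption

print("mean / std of observed amplitude:", observed.mean(), observed.std())
```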
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Camera-Model Identification Using Markovian Transition Probability Matrix
NASA Astrophysics Data System (ADS)
Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei
Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of Y and Cb components from the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
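A hedged sketch of the Markov feature construction for a single (horizontal) direction; the threshold value and array handling are illustrative.

```python
import numpy as np

def horizontal_transition_matrix(arr2d, T=4):
    """Transition probability matrix of the thresholded horizontal difference array."""
    a = np.asarray(arr2d, dtype=int)
    d = np.clip(a[:, :-1] - a[:, 1:], -T, T)           # difference JPEG 2-D array (one direction)
    cur, nxt = (d[:, :-1] + T).ravel(), (d[:, 1:] + T).ravel()
    M = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(M, (cur, nxt), 1)                         # count transitions cur -> nxt
    return M / np.maximum(M.sum(axis=1, keepdims=True), 1)

# Matrices from four directions, flattened, would form the feature vector for the SVM.
```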
A statistical parts-based appearance model of inter-subject variability.
Toews, Matthew; Collins, D Louis; Arbel, Tal
2006-01-01
In this article, we present a general statistical parts-based model for representing the appearance of an image set, applied to the problem of inter-subject MR brain image matching. In contrast with global image representations such as active appearance models, the parts-based model consists of a collection of localized image parts whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between subjects due to anatomical differences, as parts are not expected to occur in all subjects. The model can be learned automatically, discovering structures that appear with statistical regularity in a large set of subject images, and can be robustly fit to new images, all in the presence of significant inter-subject variability. As parts are derived from generic scale-invariant features, the framework can be applied in a wide variety of image contexts, in order to study the commonality of anatomical parts or to group subjects according to the parts they share. Experimentation shows that a parts-based model can be learned from a large set of MR brain images, and used to determine parts that are common within the group of subjects. Preliminary results indicate that the model can be used to automatically identify distinctive features for inter-subject image registration despite large changes in appearance.
Visualization of the variability of 3D statistical shape models by animation.
Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter
2004-01-01
Models of the 3D shape of anatomical objects and knowledge about their statistical variability are of great benefit in many computer-assisted medical applications like image analysis, therapy or surgery planning. Statistical models of shape have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
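A minimal sketch of the underlying Gaussian scale mixture with a discrete set of mixers assigned probabilistically; the mixer values and assignment probabilities are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
mixers = np.array([0.5, 2.0, 8.0])            # candidate mixer (variance) values
assign_prob = np.array([0.6, 0.3, 0.1])       # probabilistic mixer assignment

n = 10_000
v = rng.choice(mixers, size=n, p=assign_prob) # mixer drawn per input
g = rng.normal(size=n)                        # local Gaussian component
x = np.sqrt(v) * g                            # heavy-tailed, like filter responses to images

print("excess kurtosis:", ((x - x.mean())**4).mean() / x.var()**2 - 3)  # > 0 for a GSM
```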
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
Feature maps driven no-reference image quality prediction of authentically distorted images
NASA Astrophysics Data System (ADS)
Ghadiyaram, Deepti; Bovik, Alan C.
2015-03-01
Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction.
DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.
Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the usage of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models, and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of measurement data. Since the H(infinity) filter seeks minimum-maximum-error (minimax) estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.
Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David
2015-01-01
Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise constant or piecewise smooth for segments, which are implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we suggest a supervised variational level set segmentation model to harness the statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions by using the mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of contextual graphs energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
Ecological statistics of Gestalt laws for the perceptual organization of contours.
Elder, James H; Goldberg, Richard M
2002-01-01
Although numerous studies have measured the strength of visual grouping cues for controlled psychophysical stimuli, little is known about the statistical utility of these various cues for natural images. In this study, we conducted experiments in which human participants trace perceived contours in natural images. These contours are automatically mapped to sequences of discrete tangent elements detected in the image. By examining relational properties between pairs of successive tangents on these traced curves, and between randomly selected pairs of tangents, we are able to estimate the likelihood distributions required to construct an optimal Bayesian model for contour grouping. We employed this novel methodology to investigate the inferential power of three classical Gestalt cues for contour grouping: proximity, good continuation, and luminance similarity. The study yielded a number of important results: (1) these cues, when appropriately defined, are approximately uncorrelated, suggesting a simple factorial model for statistical inference; (2) moderate image-to-image variation of the statistics indicates the utility of general probabilistic models for perceptual organization; (3) these cues differ greatly in their inferential power, proximity being by far the most powerful; and (4) statistical modeling of the proximity cue indicates a scale-invariant power law in close agreement with prior psychophysics.
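Because result (1) indicates the cues are approximately uncorrelated, the grouping decision can be sketched as a factorial accumulation of per-cue likelihood ratios; the likelihood functions themselves are placeholders to be estimated from the traced and random tangent pairs.

```python
import numpy as np

def grouping_log_odds(cues, on_curve_pdfs, random_pdfs, prior_log_odds=0.0):
    """cues: dict of cue name -> measured value (proximity, good continuation, similarity).
    The pdf dicts map each cue name to a likelihood estimated from traced / random pairs."""
    log_odds = prior_log_odds
    for name, value in cues.items():
        log_odds += np.log(on_curve_pdfs[name](value)) - np.log(random_pdfs[name](value))
    return log_odds   # > 0 favors grouping the two tangents on the same contour
```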
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
Emerging Techniques for Dose Optimization in Abdominal CT
Platt, Joel F.; Goodsitt, Mitchell M.; Al-Hawary, Mahmoud M.; Maturen, Katherine E.; Wasnik, Ashish P.; Pandya, Amit
2014-01-01
Recent advances in computed tomographic (CT) scanning technique such as automated tube current modulation (ATCM), optimized x-ray tube voltage, and better use of iterative image reconstruction have allowed maintenance of good CT image quality with reduced radiation dose. ATCM varies the tube current during scanning to account for differences in patient attenuation, ensuring a more homogeneous image quality, although selection of the appropriate image quality parameter is essential for achieving optimal dose reduction. Reducing the x-ray tube voltage is best suited for evaluating iodinated structures, since the effective energy of the x-ray beam will be closer to the k-edge of iodine, resulting in a higher attenuation for the iodine. The optimal kilovoltage for a CT study should be chosen on the basis of imaging task and patient habitus. The aim of iterative image reconstruction is to identify factors that contribute to noise on CT images with use of statistical models of noise (statistical iterative reconstruction) and selective removal of noise to improve image quality. The degree of noise suppression achieved with statistical iterative reconstruction can be customized to minimize the effect of altered image quality on CT images. Unlike with statistical iterative reconstruction, model-based iterative reconstruction algorithms model both the statistical noise and the physical acquisition process, allowing CT to be performed with further reduction in radiation dose without an increase in image noise or loss of spatial resolution. Understanding these recently developed scanning techniques is essential for optimization of imaging protocols designed to achieve the desired image quality with a reduced dose. © RSNA, 2014 PMID:24428277
Statistical appearance models based on probabilistic correspondences.
Krüger, Julia; Ehrhardt, Jan; Handels, Heinz
2017-04-01
Model-based image analysis is indispensable in medical image processing. One key aspect of building statistical shape and appearance models is the determination of one-to-one correspondences in the training data set. At the same time, the identification of these correspondences is the most challenging part of such methods. In our earlier work, we developed an alternative method using correspondence probabilities instead of exact one-to-one correspondences for a statistical shape model (Hufnagel et al., 2008). In this work, a new approach for statistical appearance models without one-to-one correspondences is proposed. A sparse image representation is used to build a model that combines point position and appearance information at the same time. Probabilistic correspondences between the derived multi-dimensional feature vectors are used to omit the need for extensive preprocessing of finding landmarks and correspondences as well as to reduce the dependence of the generated model on the landmark positions. Model generation and model fitting can now be expressed by optimizing a single global criterion derived from a maximum a-posteriori (MAP) approach with respect to model parameters that directly affect both shape and appearance of the considered objects inside the images. The proposed approach describes statistical appearance modeling in a concise and flexible mathematical framework. Besides eliminating the demand for costly correspondence determination, the method allows for additional constraints as topological regularity in the modeling process. In the evaluation the model was applied for segmentation and landmark identification in hand X-ray images. The results demonstrate the feasibility of the model to detect hand contours as well as the positions of the joints between finger bones for unseen test images. Further, we evaluated the model on brain data of stroke patients to show the ability of the proposed model to handle partially corrupted data and to demonstrate a possible employment of the correspondence probabilities to indicate these corrupted/pathological areas. Copyright © 2017 Elsevier B.V. All rights reserved.
Regression Models for Identifying Noise Sources in Magnetic Resonance Images
Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.
2009-01-01
Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
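A sketch of the Rician log-likelihood that such a regression model maximizes, assuming the noise-free signal nu would itself be modeled as a function of covariates; the stable log-Bessel evaluation is an implementation detail, not taken from the paper.

```python
import numpy as np
from scipy.special import i0e

def rician_loglik(x, nu, sigma):
    """Log-density of Rician magnitudes x with underlying signal nu and noise scale sigma."""
    x, nu = np.asarray(x, float), np.asarray(nu, float)
    z = x * nu / sigma**2
    # log I0(z) evaluated stably as log(i0e(z)) + z to avoid overflow at large z
    return (np.log(x) - 2 * np.log(sigma)
            - (x**2 + nu**2) / (2 * sigma**2)
            + np.log(i0e(z)) + z)
```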
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building the statistical shape models for different inner organs efficiently. This scheme combines the traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfied the "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
Predicting perceptual quality of images in realistic scenario using deep filter banks
NASA Astrophysics Data System (ADS)
Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang
2018-03-01
Classical image perceptual quality assessment models usually resort to natural scene statistic methods, which are based on an assumption that certain reliable statistical regularities hold on undistorted images and will be corrupted by introduced distortions. However, these models usually fail to accurately predict degradation severity of images in realistic scenarios since complex, multiple, and interactive authentic distortions usually appear on them. We propose a quality prediction model based on convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation and finally a linear support vector regression model is trained to map image representation into images' subjective perceptual quality scores. The experimental results on benchmark databases present the effectiveness and generalizability of the proposed model.
Shu, Jie; Dolman, G E; Duan, Jiang; Qiu, Guoping; Ilyas, Mohammad
2016-04-27
Colour is the most important feature used in quantitative immunohistochemistry (IHC) image analysis; IHC is used to provide information relating to aetiology and to confirm malignancy. Statistical modelling is a technique widely used for colour detection in computer vision. We have developed a statistical model of colour detection applicable to the detection of stain colour in digital IHC images. The model was first trained on a large set of colour pixels collected semi-automatically. To speed up the training and detection processes, we removed the luminance (Y) channel of the YCbCr colour space and chose 128 histogram bins, which was found to be the optimal number. A maximum likelihood classifier is used to classify pixels in digital slides into positively or negatively stained pixels automatically. The model-based tool was developed within ImageJ to quantify targets identified using IHC and histochemistry. The purpose of the evaluation was to compare the computer model with human evaluation. Several large datasets were prepared and obtained from human oesophageal cancer, colon cancer and liver cirrhosis with different colour stains. Experimental results demonstrate that the model-based tool achieves more accurate results than colour deconvolution and the CMYK model in the detection of brown colour, and is comparable to colour deconvolution in the detection of pink colour. We have also demonstrated that the proposed model has little inter-dataset variation. A robust and effective statistical model is introduced in this paper. The model-based interactive tool in ImageJ, which can create a visual representation of the statistical model and detect a specified colour automatically, is easy to use and available freely at http://rsb.info.nih.gov/ij/plugins/ihc-toolbox/index.html . Testing of the tool by different users showed only minor inter-observer variations in results.
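A hedged sketch of the histogram-based maximum likelihood classification described above, assuming Cb/Cr pixel values in [0, 256); the Laplace smoothing is an added assumption.

```python
import numpy as np

BINS = 128   # histogram bins per chroma channel, as in the abstract

def train_colour_model(cb, cr):
    """Class-conditional likelihood as a smoothed 2-D Cb/Cr histogram."""
    hist, _, _ = np.histogram2d(cb, cr, bins=BINS, range=[[0, 256], [0, 256]])
    return (hist + 1) / (hist.sum() + BINS * BINS)

def classify(cb, cr, pos_model, neg_model):
    """Maximum likelihood decision per pixel: True = positively stained."""
    i = np.clip(np.asarray(cb, int) * BINS // 256, 0, BINS - 1)
    j = np.clip(np.asarray(cr, int) * BINS // 256, 0, BINS - 1)
    return pos_model[i, j] > neg_model[i, j]
```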
A statistical model for radar images of agricultural scenes
NASA Technical Reports Server (NTRS)
Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.
1982-01-01
The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the a priori liver model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy by using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
A generalized Benford's law for JPEG coefficients and its applications in image forensics
NASA Astrophysics Data System (ADS)
Fu, Dongdong; Shi, Yun Q.; Su, Wei
2007-02-01
In this paper, a novel statistical model based on Benford's law for the probability distributions of the first digits of the block-DCT and quantized JPEG coefficients is presented. A parametric logarithmic law, i.e., the generalized Benford's law, is formulated. Furthermore, some potential applications of this model in image forensics are discussed in this paper, which include the detection of JPEG compression for images in bitmap format, the estimation of the JPEG compression Q-factor for a JPEG-compressed bitmap image, and the detection of double-compressed JPEG images. The results of our extensive experiments demonstrate the effectiveness of the proposed statistical model.
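A sketch of how the empirical first-digit distribution can be compared against a parametric logarithmic law of the form p(d) = N·log10(1 + 1/(s + d^q)); this parameterization is assumed as a common generalized-Benford form, and the fitting of N, q, s to data is not shown.

```python
import numpy as np

def first_digit_hist(coefficients):
    """Empirical distribution of first digits of nonzero coefficient magnitudes."""
    mags = np.abs(np.asarray(coefficients, float))
    mags = mags[mags >= 1]                                   # quantized coefficients are integers
    digits = (mags // 10 ** np.floor(np.log10(mags))).astype(int)
    return np.bincount(digits, minlength=10)[1:10] / len(digits)

def generalized_benford(d, N, q, s):
    """Parametric logarithmic law; N, q, s are fitted to the observed histogram."""
    return N * np.log10(1 + 1 / (s + np.asarray(d, float) ** q))
```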
Gowd, Snigdha; Shankar, T; Dash, Samarendra; Sahoo, Nivedita; Chatterjee, Suravi; Mohanty, Pritam
2017-01-01
Aims and Objective: The aim of the study was to evaluate the reliability of cone beam computed tomography (CBCT) obtained image over plaster model for the assessment of mixed dentition analysis. Materials and Methods: Thirty CBCT-derived images and thirty plaster models were derived from the dental archives, and Moyer's and Tanaka-Johnston analyses were performed. The data obtained were interpreted and analyzed statistically using SPSS 10.0/PC (SPSS Inc., Chicago, IL, USA). Descriptive and analytical analysis along with Student's t-test was performed to qualitatively evaluate the data and P < 0.05 was considered statistically significant. Results: Statistically, significant results were obtained on data comparison between CBCT-derived images and plaster model; the mean for Moyer's analysis in the left and right lower arch for CBCT and plaster model was 21.2 mm, 21.1 mm and 22.5 mm, 22.5 mm, respectively. Conclusion: CBCT-derived images were less reliable as compared to data obtained directly from plaster model for mixed dentition analysis. PMID:28852639
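For reference, a hedged sketch of the Tanaka-Johnston prediction applied in the analysis, using the commonly cited constants of 10.5 mm (mandibular arch) and 11.0 mm (maxillary arch); the incisor sum in the example is illustrative.

```python
def tanaka_johnston(sum_mandibular_incisors_mm, arch="mandibular"):
    """Predicted mesiodistal width of the unerupted canine + premolars in one quadrant."""
    constant = 10.5 if arch == "mandibular" else 11.0
    return sum_mandibular_incisors_mm / 2 + constant

print(tanaka_johnston(23.0))   # e.g. 23.0 mm of lower incisors -> 22.0 mm predicted segment
```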
Infinitely divisible cascades to model the statistics of natural images.
Chainais, Pierre
2007-12-01
We propose to model the statistics of natural images using the large class of stochastic processes called Infinitely Divisible Cascades (IDC). IDC were first introduced in one dimension to provide multifractal time series for modeling the so-called intermittency phenomenon in hydrodynamical turbulence. We have extended the definition of scalar infinitely divisible cascades from 1 to N dimensions and commented on the relevance of such a model in fully developed turbulence in [1]. In this article, we focus on the particular two-dimensional case. IDC appear as good candidates to model the statistics of natural images. They share most of their usual properties and appear to be consistent with several independent theoretical and experimental approaches in the literature. We point out the interest of IDC for applications to procedural texture synthesis.
NASA Technical Reports Server (NTRS)
Tilley, David G.
1988-01-01
The surface wave field produced by Hurricane Josephine was imaged by the L-band SAR aboard the Challenger on October 12, 1984. Exponential trends found in the two-dimensional autocorrelations of speckled image data support an equilibrium theory model of sea surface hydrodynamics. The notions of correlated specular reflection, surface coherence, optimal Doppler parameterization and spatial resolution are discussed within the context of a Poisson-Rayleigh statistical model of the SAR imaging process.
Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.
Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder
2016-02-01
A good statistical model is necessary for the reduction of speckle noise. The Nakagami model is more general than the Rayleigh distribution for statistical modeling of speckle in ultrasound images. In this article, a Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and to improve clinical diagnosis. The statistics of the log-compressed image are derived from the Nakagami distribution following a maximum a posteriori estimation framework. The minimization problem is solved by optimizing an augmented Lagrangian with Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving the textures and fine details. © IMechE 2015.
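A sketch of the Nakagami amplitude log-density that underlies the speckle model; the MAP formulation over the log-compressed image and the total variation minimization are not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def nakagami_logpdf(x, m, omega):
    """Log-density of the Nakagami amplitude distribution (shape m, spread omega)."""
    x = np.asarray(x, float)
    return (np.log(2) + m * np.log(m) - gammaln(m) - m * np.log(omega)
            + (2 * m - 1) * np.log(x) - m * x**2 / omega)
```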
Multivariate statistical model for 3D image segmentation with application to medical images.
John, Nigel M; Kabuka, Mansur R; Ibrahim, Mohamed O
2003-12-01
In this article we describe a statistical model that was developed to segment brain magnetic resonance images. The statistical segmentation algorithm was applied after a pre-processing stage involving the use of a 3D anisotropic filter along with histogram equalization techniques. The segmentation algorithm makes use of prior knowledge and a probability-based multivariate model designed to semi-automate the process of segmentation. The algorithm was applied to images obtained from the Center for Morphometric Analysis at Massachusetts General Hospital as part of the Internet Brain Segmentation Repository (IBSR). The developed algorithm showed improved accuracy over the k-means, adaptive maximum a posteriori probability (MAP), biased MAP, and other algorithms. Experimental results showing the segmentation and the results of comparisons with other algorithms are provided. Results are based on an overlap criterion against expertly segmented images from the IBSR. The algorithm produced average results of approximately 80% overlap with the expertly segmented images (compared with 85% for manual segmentation and 55% for other algorithms).
Meteor localization via statistical analysis of spatially temporal fluctuations in image sequences
NASA Astrophysics Data System (ADS)
Kukal, Jaromír.; Klimt, Martin; Šihlík, Jan; Fliegel, Karel
2015-09-01
Meteor detection is one of the most important procedures in astronomical imaging. A meteor path in Earth's atmosphere is traditionally reconstructed from a double-station video observation system generating 2D image sequences. However, atmospheric turbulence and other factors cause spatially-temporal fluctuations of the image background, which make the localization of the meteor path more difficult. Our approach is based on nonlinear preprocessing of image intensity using the Box-Cox transform, with the logarithmic transform as its particular case. The transformed image sequences are then differentiated along discrete coordinates to obtain a statistical description of sky background fluctuations, which can be modeled by a multivariate normal distribution. After verification and hypothesis testing, we use the statistical model for outlier detection. While isolated outlier points are ignored, a compact cluster of outliers indicates the presence of meteoroids after ignition.
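A hedged sketch of the outlier test: after the nonlinear transform and differencing, background fluctuations are treated as multivariate normal and pixels with a large Mahalanobis distance are flagged. Variable names and the threshold are illustrative.

```python
import numpy as np

def outlier_mask(feature_vectors, chi2_threshold):
    """Flag pixels whose spatial/temporal difference features deviate from the fitted
    multivariate normal background model (squared Mahalanobis distance test)."""
    X = np.asarray(feature_vectors, float)                 # one row of differences per pixel
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    m2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)
    return m2 > chi2_threshold                             # clusters of True suggest a meteor
```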
Segmentation of prostate boundaries from ultrasound images using statistical shape model.
Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos
2003-04-01
This paper presents a statistical shape model for the automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images in both multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Variational stereo imaging of oceanic waves with statistical constraints.
Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise
2013-11-01
An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
Image statistics underlying natural texture selectivity of neurons in macaque V4
Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko
2015-01-01
Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors for each neuron’s preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing the natural texture perception. PMID:25535362
A semiempirical linear model of indirect, flat-panel x-ray detectors.
Huang, Shih-Ying; Yang, Kai; Abbey, Craig K; Boone, John M
2012-04-01
It is important to understand signal and noise transfer in the indirect, flat-panel x-ray detector when developing and optimizing imaging systems. For optimization where simulating images is necessary, this study introduces a semiempirical model to simulate projection images with user-defined x-ray fluence interaction. The signal and noise transfer in the indirect, flat-panel x-ray detectors is characterized by statistics consistent with energy-integration of x-ray photons. For an incident x-ray spectrum, x-ray photons are attenuated and absorbed in the x-ray scintillator to produce light photons, which are coupled to photodiodes for signal readout. The signal mean and variance are linearly related to the energy-integrated x-ray spectrum by empirically determined factors. With the known first- and second-order statistics, images can be simulated by incorporating multipixel signal statistics and the modulation transfer function of the imaging system. To estimate the semiempirical input to this model, 500 projection images (using an indirect, flat-panel x-ray detector in the breast CT system) were acquired with 50-100 kilovolt (kV) x-ray spectra filtered with 0.1-mm tin (Sn), 0.2-mm copper (Cu), 1.5-mm aluminum (Al), or 0.05-mm silver (Ag). The signal mean and variance of each detector element and the noise power spectra (NPS) were calculated and incorporated into this model for accuracy. Additionally, the modulation transfer function of the detector system was physically measured and incorporated in the image simulation steps. For validation purposes, simulated and measured projection images of air scans were compared using 40 kV∕0.1-mm Sn, 65 kV∕0.2-mm Cu, 85 kV∕1.5-mm Al, and 95 kV∕0.05-mm Ag. The linear relationship between the measured signal statistics and the energy-integrated x-ray spectrum was confirmed and incorporated into the model. The signal mean and variance factors were linearly related to kV for each filter material (r(2) of signal mean to kV: 0.91, 0.93, 0.86, and 0.99 for 0.1-mm Sn, 0.2-mm Cu, 1.5-mm Al, and 0.05-mm Ag, respectively; r(2) of signal variance to kV: 0.99 for all four filters). The comparison of the signal and noise (mean, variance, and NPS) between the simulated and measured air scan images suggested that this model was reasonable in predicting accurate signal statistics of air scan images using absolute percent error. Overall, the model was found to be accurate in estimating signal statistics and spatial correlation between the detector elements of the images acquired with indirect, flat-panel x-ray detectors. The semiempirical linear model of the indirect, flat-panel x-ray detectors was described and validated with images of air scans. The model was found to be a useful tool in understanding the signal and noise transfer within indirect, flat-panel x-ray detector systems.
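A minimal sketch of the simulation chain implied by the linear model, with placeholder gain, variance, and MTF parameters standing in for the empirically fitted factors and measured MTF described in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_projection(energy_kev, fluence, gain=1.0e-3, var_factor=2.0e-3,
                        shape=(256, 256), mtf_sigma_px=0.7, seed=None):
    """Simulate an air-scan projection: mean and variance linear in the
    energy-integrated spectrum, then spatially correlated by an MTF surrogate."""
    rng = np.random.default_rng(seed)
    energy_integrated = float(np.sum(np.asarray(energy_kev) * np.asarray(fluence)))
    mean_signal = gain * energy_integrated
    variance = var_factor * energy_integrated
    noisy = rng.normal(mean_signal, np.sqrt(variance), size=shape)
    return gaussian_filter(noisy, mtf_sigma_px)
```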
Shi, Y; Qi, F; Xue, Z; Chen, L; Ito, K; Matsuo, H; Shen, D
2008-04-01
This paper presents a new deformable model using both population-based and patient-specific shape statistics to segment lung fields from serial chest radiographs. There are two novelties in the proposed deformable model. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than the general intensity and gradient features, is used to characterize the image features in the vicinity of each pixel. Second, the deformable contour is constrained by both population-based and patient-specific shape statistics, and it yields more robust and accurate segmentation of lung fields for serial chest radiographs. In particular, for segmenting the initial time-point images, the population-based shape statistics is used to constrain the deformable contour; as more subsequent images of the same patient are acquired, the patient-specific shape statistics online collected from the previous segmentation results gradually takes more roles. Thus, this patient-specific shape statistics is updated each time when a new segmentation result is obtained, and it is further used to refine the segmentation results of all the available time-point images. Experimental results show that the proposed method is more robust and accurate than other active shape models in segmenting the lung fields from serial chest radiographs.
Interactive classification and content-based retrieval of tissue images
NASA Astrophysics Data System (ADS)
Aksoy, Selim; Marchisio, Giovanni B.; Tusk, Carsten; Koperski, Krzysztof
2002-11-01
We describe a system for interactive classification and retrieval of microscopic tissue images. Our system models tissues in pixel, region and image levels. Pixel level features are generated using unsupervised clustering of color and texture values. Region level features include shape information and statistics of pixel level feature values. Image level features include statistics and spatial relationships of regions. To reduce the gap between low-level features and high-level expert knowledge, we define the concept of prototype regions. The system learns the prototype regions in an image collection using model-based clustering and density estimation. Different tissue types are modeled using spatial relationships of these regions. Spatial relationships are represented by fuzzy membership functions. The system automatically selects significant relationships from training data and builds models which can also be updated using user relevance feedback. A Bayesian framework is used to classify tissues based on these models. Preliminary experiments show that the spatial relationship models we developed provide a flexible and powerful framework for classification and retrieval of tissue images.
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT statistical modeling of the sea background in large-scale oceansat remote sensing images is proposed. First, the large-scale remote sensing image is divided into small sub-blocks, and the 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear characteristic difference between sea-background and non-sea-background blocks is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on this 2-D STFT spectrum model is proposed. Experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
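A minimal sketch of the first step, dividing the image into sub-blocks and computing a windowed 2-D spectrum (a 2-D STFT) for each block, might look as follows; the block size, step, and window choice are assumptions for illustration.

```python
import numpy as np

def block_stft_spectra(image, block=64, step=32):
    """Sketch: slide a window over a large sea-surface image and compute a
    2-D windowed Fourier magnitude spectrum for each sub-block."""
    win = np.outer(np.hanning(block), np.hanning(block))
    spectra, positions = [], []
    for r in range(0, image.shape[0] - block + 1, step):
        for c in range(0, image.shape[1] - block + 1, step):
            patch = image[r:r + block, c:c + block] * win
            spectra.append(np.abs(np.fft.fftshift(np.fft.fft2(patch))))
            positions.append((r, c))
    return np.array(spectra), positions

# Blocks whose spectra deviate strongly from the fitted sea-background statistics
# would then be flagged as potential ship detections.
```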
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
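The kernel density estimation step can be illustrated with a small sketch: estimate the pdf of previously reconstructed samples with a Gaussian KDE and take the mean under that density as an MMSE-style prediction. The stand-in data, the default (Scott's-rule) bandwidth, and the per-sample prediction are simplifications; the paper's kernel-trick bandwidth estimation and block-wise prediction are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in neighbouring sample values from previously coded blocks.
neighbours = np.random.default_rng(0).normal(128, 12, size=500)
kde = gaussian_kde(neighbours)                 # Scott's-rule bandwidth by default
support = np.linspace(0, 255, 256)
pdf = kde(support)
mmse_prediction = np.sum(support * pdf) / np.sum(pdf)   # mean under the estimated pdf
```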
1997-09-30
Contours are in meters. Illuminati on is from the top of the image. Center of image is at ap proximately 39•25’ N, 73•oo·w. Figure 1 b. Conditional simulation of above image, using statistical model derived from the data.
Bayesian depth estimation from monocular natural images.
Su, Che-Chun; Cormack, Lawrence K; Bovik, Alan C
2017-05-01
Estimating an accurate and naturalistic dense depth map from a single monocular photographic image is a difficult problem. Nevertheless, human observers have little difficulty understanding the depth structure implied by photographs. Two-dimensional (2D) images of the real-world environment contain significant statistical information regarding the three-dimensional (3D) structure of the world that the vision system likely exploits to compute perceived depth, monocularly as well as binocularly. Toward understanding how this might be accomplished, we propose a Bayesian model of monocular depth computation that recovers detailed 3D scene structures by extracting reliable, robust, depth-sensitive statistical features from single natural images. These features are derived using well-accepted univariate natural scene statistics (NSS) models and recent bivariate/correlation NSS models that describe the relationships between 2D photographic images and their associated depth maps. This is accomplished by building a dictionary of canonical local depth patterns from which NSS features are extracted as prior information. The dictionary is used to create a multivariate Gaussian mixture (MGM) likelihood model that associates local image features with depth patterns. A simple Bayesian predictor is then used to form spatial depth estimates. The depth results produced by the model, despite its simplicity, correlate well with ground-truth depths measured by a current-generation terrestrial light detection and ranging (LIDAR) scanner. Such a strong form of statistical depth information could be used by the visual system when creating overall estimated depth maps incorporating stereopsis, accommodation, and other conditions. Indeed, even in isolation, the Bayesian predictor delivers depth estimates that are competitive with state-of-the-art "computer vision" methods that utilize highly engineered image features and sophisticated machine learning algorithms.
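A simplified sketch of the likelihood-plus-predictor stage: fit a Gaussian mixture to joint (image-feature, depth-pattern) vectors and predict depth as the posterior-weighted average of the depth parts of the component means. The stand-in data, feature dimensions, and the omission of the within-component regression term are assumptions for illustration, not the authors' exact model.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Stand-in data: NSS features per patch (8-D) and depth-pattern coefficients (4-D).
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(1000, 8))
depth_codes = rng.normal(size=(1000, 4))
gmm = GaussianMixture(n_components=10, covariance_type="full", random_state=0)
gmm.fit(np.hstack([img_feats, depth_codes]))   # joint (image-feature, depth) mixture

def predict_depth(feat, gmm, d_img=8):
    """Simplified Bayesian predictor: weight the depth part of each component mean
    by the posterior probability of that component given the image features alone
    (a full conditional would also include a within-component regression term)."""
    means_img = gmm.means_[:, :d_img]
    covs_img = gmm.covariances_[:, :d_img, :d_img]
    lik = np.array([multivariate_normal.pdf(feat, m, c)
                    for m, c in zip(means_img, covs_img)])
    post = gmm.weights_ * lik
    post /= post.sum()
    return post @ gmm.means_[:, d_img:]

depth_estimate = predict_depth(img_feats[0], gmm)
```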
Erus, Guray; Zacharaki, Evangelia I; Davatzikos, Christos
2014-04-01
This paper presents a method for capturing statistical variation of normal imaging phenotypes, with emphasis on brain structure. The method aims to estimate the statistical variation of a normative set of images from healthy individuals, and identify abnormalities as deviations from normality. A direct estimation of the statistical variation of the entire volumetric image is challenged by the high-dimensionality of images relative to smaller sample sizes. To overcome this limitation, we iteratively sample a large number of lower dimensional subspaces that capture image characteristics ranging from fine and localized to coarser and more global. Within each subspace, a "target-specific" feature selection strategy is applied to further reduce the dimensionality, by considering only imaging characteristics present in a test subject's images. Marginal probability density functions of selected features are estimated through PCA models, in conjunction with an "estimability" criterion that limits the dimensionality of estimated probability densities according to available sample size and underlying anatomy variation. A test sample is iteratively projected to the subspaces of these marginals as determined by PCA models, and its trajectory delineates potential abnormalities. The method is applied to segmentation of various brain lesion types, and to simulated data on which superiority of the iterative method over straight PCA is demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and obtains desirable results in segmenting infrared images.
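For readers unfamiliar with signed pressure functions, the following simplified sketch evolves a level set with a purely global SPF (the inside/outside means, Chan-Vese style); the paper's MFSPF additionally mixes local statistical features with adaptive weights, which are omitted here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spf_segment(image, iters=200, dt=1.0, sigma=1.5):
    """Simplified global-SPF region-based contour (not the paper's MFSPF)."""
    image = image.astype(float)
    phi = np.ones_like(image)
    r0, c0 = image.shape[0] // 4, image.shape[1] // 4
    phi[r0:-r0, c0:-c0] = -1.0                       # initial contour: central box
    for _ in range(iters):
        inside, outside = phi < 0, phi >= 0
        if not inside.any() or not outside.any():
            break
        c1, c2 = image[inside].mean(), image[outside].mean()
        spf = image - (c1 + c2) / 2.0                # signed pressure function
        spf /= np.abs(spf).max() + 1e-12
        gx, gy = np.gradient(phi)
        phi += dt * spf * np.sqrt(gx ** 2 + gy ** 2) # evolve the level set
        phi = gaussian_filter(phi, sigma)            # Gaussian regularisation
    return phi < 0                                   # binary segmentation mask
```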
Summary Statistics for Homemade 'Play Dough' -- Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough{trademark}-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU{sub D} at 100kVp to a low of about 1200 LMHU{sub D} at 300kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z{sub eff}, to be near 10. LLNL prepared about 50mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics;' those of the gradient image, 'second order statistics.'
Vegas-Sanchez-Ferrero, G; Aja-Fernandez, S; Martin-Fernandez, M; Frangi, A F; Palencia, C
2010-01-01
A novel anisotropic diffusion filter is proposed in this work with application to cardiac ultrasonic images. It includes probabilistic models which describe the probability density function (PDF) of tissues and adapts the diffusion tensor to the image iteratively. For this purpose, a preliminary study is performed in order to select the probability models that best fit the statistical behavior of each tissue class in cardiac ultrasonic images. Then, the parameters of the diffusion tensor are defined taking into account the statistical properties of the image at each voxel. When the structure tensor of the probability of belonging to each tissue is included in the diffusion tensor definition, better boundary estimates can be obtained than by calculating the boundaries directly from the image. This is the main contribution of this work. Additionally, the proposed method follows the statistical properties of the image in each iteration. This is considered a second contribution, since state-of-the-art methods assume that the noise and statistical properties of the image do not change during the filtering process.
Machine learning to analyze images of shocked materials for precise and accurate measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.
A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for HARDI, a popular brain imaging technique, based on the high order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves, or fibers, which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids the typical limitations of deterministic tractography methods and it delivers the same information as probabilistic tractography methods. Our method is computationally cheap and it provides a well-founded mathematical and statistical framework within which diverse functionals on fibers, directions, and tensors can be studied in a systematic and rigorous way.
Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D
2007-02-01
Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study seeks to test whether a more flexible statistical analysis can better detect activation, by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that even at 80% compliance to the task, such a flexible model outperforms standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the receiver operating characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.
Enhancing image classification models with multi-modal biomarkers
NASA Astrophysics Data System (ADS)
Caban, Jesus J.; Liao, David; Yao, Jianhua; Mollura, Daniel J.; Gochuico, Bernadette; Yoo, Terry
2011-03-01
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose, quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at providing quantitative measurements and assisting physicians during the decision-making process. As the need for more flexible and effective CADs continues to grow, questions about how to enhance their accuracy have surged. In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically score subjects with pulmonary fibrosis. In our results, a 3-5% improvement was observed when comparing the accuracy of CADs that use multi-modal biomarkers with those that only used image features. Our results show that lab values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40, are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
Goumeidane, Aicha Baya; Nacereddine, Nafaa; Khamadja, Mohammed
2015-01-01
Accurate knowledge of a defect's shape is decisive for the analysis step in automatic radiographic inspection. Image segmentation is carried out on radiographic images to extract defect indications. This paper deals with weld defect delineation in radiographic images. The proposed method is based on a new statistics-based explicit active contour. A combination of local and global modeling of the image pixel intensities is used to push the model to the desired boundaries. Furthermore, other strategies are proposed to accelerate its evolution and make the convergence speed depend only on the defect size, such as restricting computation to a band around the active contour curve. The experimental results are very promising, since experiments on synthetic and radiographic images show the ability of the proposed model to extract a piece-wise homogeneous object from a very inhomogeneous background, even in a poor-quality image.
A joint source-channel distortion model for JPEG compressed images.
Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C
2006-06-01
The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-code modulation, and run-length coding is included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image as human opinions, in which feature extraction is an important issue. However, the features used in the state-of-the-art "general purpose" NR-IQA algorithms are usually natural scene statistics (NSS) based or are perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics for single pixel values which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using a support vector regression. The experimental results on the benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
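Two of the ingredients can be sketched briefly: a moment-matching estimate of the generalized Gaussian shape parameter for the point-wise spatial statistics, and a support vector regression mapping a small feature vector to quality scores. The gradient features and training data below are stand-ins, not the authors' exact feature set.

```python
import numpy as np
from scipy.special import gamma
from sklearn.svm import SVR

def ggd_shape(x):
    """Moment-matching estimate of the shape parameter of a zero-mean
    generalized Gaussian distribution."""
    x = x - x.mean()
    rho = np.mean(np.abs(x)) ** 2 / (np.mean(x ** 2) + 1e-12)
    a = np.arange(0.2, 10.0, 0.001)
    r = gamma(2.0 / a) ** 2 / (gamma(1.0 / a) * gamma(3.0 / a))
    return a[np.argmin((r - rho) ** 2)]

def features(img):
    """Hypothetical feature vector: GGD shape and spread of pixel values plus
    simple gradient-magnitude statistics (stand-ins for the gradient features)."""
    g = np.hypot(*np.gradient(img.astype(float)))
    return np.array([ggd_shape(img.ravel()), img.std(), g.mean(), g.std()])

# Map features to subjective scores with support vector regression (stand-in data).
rng = np.random.default_rng(0)
imgs = rng.normal(size=(50, 64, 64))
dmos = rng.uniform(0, 100, size=50)            # stand-in human quality scores
X = np.array([features(im) for im in imgs])
model = SVR(kernel="rbf", C=10.0).fit(X, dmos)
```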
Analysis of Variance in Statistical Image Processing
NASA Astrophysics Data System (ADS)
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
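As a rough illustration only (not the paper's derivation, which accounts for within-region signal correlations), a generic two-sample sample-size calculation can show how registration error, treated here as an extra variance component, trades off against the number of subjects and the minimum detectable difference.

```python
import numpy as np
from scipy.stats import norm

def subjects_needed(delta, sigma_signal, sigma_reg, alpha=0.05, power=0.8):
    """Generic normal-approximation sketch: registration error is assumed to add
    variance to the per-region signal, and the standard two-sample formula gives
    the subjects needed to detect a mean difference `delta`."""
    sigma_total = np.sqrt(sigma_signal ** 2 + sigma_reg ** 2)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(2 * (z * sigma_total / delta) ** 2))

# e.g. subjects_needed(delta=0.5, sigma_signal=1.0, sigma_reg=0.7)
```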
Feng, Chao-Hui; Makino, Yoshio; Yoshimura, Masatoshi; Thuyet, Dang Quoc; García-Martín, Juan Francisco
2018-02-01
Hyperspectral imaging over wavelengths of 380 to 1000 nm was used to determine the pH of cooked sausages after different storage conditions (4 °C for 1 d; 35 °C for 1, 3, and 5 d). The mean spectra of the sausages were extracted from the hyperspectral images, and a partial least squares regression (PLSR) model was developed to relate the spectral profiles to the pH of the cooked sausages. Eleven important wavelengths were selected based on the regression coefficient values. The PLSR model established using the optimal wavelengths showed good precision, with a prediction coefficient of determination (Rp2) of 0.909 and a root mean square error of prediction of 0.035. A prediction map illustrating pH indices in sausages was developed for the first time using R statistics. The overall results suggested that hyperspectral imaging combined with PLSR and R statistics is capable of quantifying and visualizing the pH evolution of sausages under different storage conditions. In this paper, hyperspectral imaging is used for the first time to detect pH in cooked sausages using R statistics, which provides additional useful information for researchers who do not have access to Matlab. Eleven optimal wavelengths were successfully selected and used to simplify the PLSR model built on the full set of wavelengths. This simplified model achieved a high Rp2 (0.909) and a low root mean square error of prediction (0.035), which can be useful for the design of multispectral imaging systems. © 2017 Institute of Food Technologists®.
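A hypothetical sketch of the modeling workflow, with stand-in spectra and pH values: fit a PLSR model, rank wavelengths by absolute regression coefficient, and refit a simplified model on the top 11. The component counts and data are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
wavelengths = np.linspace(380, 1000, 200)
X = rng.normal(size=(60, wavelengths.size))      # stand-in mean spectrum per sausage
y = 6.0 + 0.3 * rng.normal(size=60)              # stand-in measured pH
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
top = np.argsort(np.abs(pls.coef_.ravel()))[-11:]       # 11 "optimal" wavelengths
pls_simple = PLSRegression(n_components=5).fit(X_tr[:, top], y_tr)
rmsep = mean_squared_error(y_te, pls_simple.predict(X_te[:, top]).ravel()) ** 0.5
```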
Summary Statistics for Fun Dough Data Acquired at LLNL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kallman, J S; Morales, K E; Whipple, R E
Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough{trademark}-like product, Fun Dough{trademark}, designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU{sub D} at 100kVp to a low of about 1100 LMHU{sub D} at 300kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z{sub eff}, to be near 8.5. LLNL prepared about 50mL of the Fun Dough{trademark} in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics;' those of the gradient image, 'second order statistics.'
Ghodrati, Masoud; Ghodousi, Mahrad; Yoonessi, Ali
2016-01-01
Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERP) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, explained the variance of the ERPs best, compared to the other image statistics that we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and the ERPs' power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency of neural responses can be modulated by the low-level contrast statistics of natural images, which highlights their potential role in scene perception.
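The Weibull contrast statistics referred to above can be computed, in simplified form, by fitting a Weibull distribution to an image's local contrast (gradient magnitude) values; the two fitted parameters serve as the low-level statistics. This is a generic sketch, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_contrast_stats(image):
    """Fit a Weibull distribution to local contrast values (gradient magnitudes)
    and return the shape and scale parameters."""
    gy, gx = np.gradient(image.astype(float))
    contrast = np.sqrt(gx ** 2 + gy ** 2).ravel()
    shape, loc, scale = weibull_min.fit(contrast[contrast > 0], floc=0)
    return shape, scale

# e.g. beta, gamma = weibull_contrast_stats(np.random.rand(128, 128))
```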
Probabilistic Modeling and Visualization of the Flexibility in Morphable Models
NASA Astrophysics Data System (ADS)
Lüthi, M.; Albrecht, T.; Vetter, T.
Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach, we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.
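The core computation, conditioning a Gaussian shape model on a fixed part of the shape, can be sketched directly: the conditional mean reconstructs the full shape and the conditional covariance quantifies the remaining flexibility. The sketch assumes the full mean and covariance are available (e.g., from a PPCA fit); the paper's PPCA-specific formulation is not reproduced.

```python
import numpy as np

def remaining_flexibility(mean, cov, fixed_idx, fixed_vals):
    """Condition a Gaussian shape model on fixed coordinates: the conditional mean
    reconstructs the free part, and the diagonal of the conditional covariance
    maps the remaining flexibility."""
    free_idx = np.setdiff1d(np.arange(mean.size), fixed_idx)
    S_ff = cov[np.ix_(free_idx, free_idx)]
    S_fo = cov[np.ix_(free_idx, fixed_idx)]
    S_oo = cov[np.ix_(fixed_idx, fixed_idx)]
    gain = S_fo @ np.linalg.pinv(S_oo)
    cond_mean = mean[free_idx] + gain @ (np.asarray(fixed_vals) - mean[fixed_idx])
    cond_cov = S_ff - gain @ S_fo.T
    return free_idx, cond_mean, cond_cov
```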
Loxley, P N
2017-10-01
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
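For reference, a minimal parameterisation of the two-dimensional Gabor function with the quantities discussed above (envelope size, spatial frequency, orientation, aspect ratio) might look as follows; the exact parameterisation used in the paper may differ.

```python
import numpy as np

def gabor_2d(size_px=32, sigma=6.0, freq=0.15, theta=0.0, aspect=1.0, phase=0.0):
    """Two-dimensional Gabor: a Gaussian envelope (size sigma, aspect ratio)
    multiplying a cosine carrier (spatial frequency freq, orientation theta)."""
    half = size_px // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (aspect * yr) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * freq * xr + phase)
    return envelope * carrier
```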
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using t-test and ANOVA. Throughout recent years, statistical learning methods enjoy increasing popularity especially for applications in rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It is retraced how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
Using statistical deformable models to reconstruct vocal tract shape from magnetic resonance images.
Vasconcelos, M J M; Rua Ventura, S M; Freitas, D R S; Tavares, J M R S
2010-10-01
The mechanisms involved in speech production are complex and have thus been subject to growing attention by the scientific community. It has been demonstrated that magnetic resonance imaging (MRI) is a powerful means of understanding the morphology of the vocal tract. Over the last few years, statistical deformable models have been successfully used to identify and characterize bones and organs in medical images, and point distribution models (PDMs) have gained particular relevance. In this work, the suitability of these models has been studied to characterize and further reconstruct the shape of the vocal tract during the articulation of European Portuguese (EP) speech sounds, Portuguese being one of the most widely spoken languages worldwide, with the aid of MR images. Therefore, a PDM has been built from a set of MR images acquired during the artificially sustained articulation of 25 EP speech sounds. Following this, the capacity of this statistical model to characterize the shape deformation of the vocal tract during the production of sounds was analysed. Next, the model was used to reconstruct five EP oral vowels and the EP fricative consonants. As far as studies on speech production are concerned, this study is considered to be the first approach to characterize and reconstruct the vocal tract shape from MR images by using PDMs. In addition, the findings indicate that this modelling technique affords an enhanced understanding of the dynamic speech events involved in sustained articulations based on MRI, which are of particular interest for speech rehabilitation and simulation.
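The point distribution model itself can be sketched compactly: stack aligned landmark vectors, take the mean and the leading principal modes, and synthesize new vocal-tract shapes as the mean plus a weighted sum of modes. Alignment (Procrustes) and landmarking are omitted; the names and mode count are illustrative.

```python
import numpy as np

def build_pdm(shapes, n_modes=5):
    """Point distribution model: mean shape plus principal modes of variation
    computed from a stack of aligned landmark vectors (n_shapes x 2*n_landmarks)."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_modes].T                           # modes (columns)
    var = (s[:n_modes] ** 2) / (len(X) - 1)      # mode variances
    return mean, P, var

def synthesize(mean, P, var, b_sd):
    """Generate a shape from mode weights given in standard-deviation units."""
    return mean + P @ (np.asarray(b_sd) * np.sqrt(var))
```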
Statistical Inference for Porous Materials using Persistent Homology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moon, Chul; Heath, Jason E.; Mitchell, Scott A.
2017-12-01
We propose a porous materials analysis pipeline using persistent homology. We first compute persistent homology of binarized 3D images of sampled material subvolumes. For each image we compute sets of homology intervals, which are represented as summary graphics called persistence diagrams. We convert persistence diagrams into image vectors in order to analyze the similarity of the homology of the material images using the mature tools for image analysis. Each image is treated as a vector and we compute its principal components to extract features. We fit a statistical model using the loadings of principal components to estimate material porosity, permeability, anisotropy, and tortuosity. We also propose an adaptive version of the structural similarity index (SSIM), a similarity metric for images, as a measure to determine the statistical representative elementary volumes (sREV) for persistence homology. Thus we provide a capability for making a statistical inference of the fluid flow and transport properties of porous materials based on their geometry and connectivity.
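A schematic version of the vectorization-and-regression stage, using plain NumPy/scikit-learn rather than a topology library: convert each persistence diagram into a coarse (birth, persistence) histogram, run PCA, and regress a material property on the loadings. The histogram grid, component count, and stand-in data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def diagram_to_vector(diagram, grid=20, bmax=1.0):
    """Turn (birth, death) pairs into a coarse 2-D histogram over
    (birth, persistence), flattened to an image vector."""
    d = np.asarray(diagram, dtype=float)
    birth, pers = d[:, 0], d[:, 1] - d[:, 0]
    H, _, _ = np.histogram2d(birth, pers, bins=grid, range=[[0, bmax], [0, bmax]])
    return H.ravel()

# Stand-in diagrams and property values; np.sort keeps birth <= death.
rng = np.random.default_rng(0)
diagrams = [np.sort(rng.uniform(0, 1, size=(30, 2)), axis=1) for _ in range(20)]
V = np.array([diagram_to_vector(d) for d in diagrams])
Z = PCA(n_components=5).fit_transform(V)
porosity = rng.uniform(0.1, 0.3, size=20)
reg = LinearRegression().fit(Z, porosity)
```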
Stout, David B.; Chatziioannou, Arion F.
2012-01-01
Micro-CT is widely used in preclinical studies of small animals. Due to the low soft-tissue contrast in typical studies, segmentation of soft tissue organs from noncontrast enhanced micro-CT images is a challenging problem. Here, we propose an atlas-based approach for estimating the major organs in mouse micro-CT images. A statistical atlas of major trunk organs was constructed based on 45 training subjects. The statistical shape model technique was used to include inter-subject anatomical variations. The shape correlations between different organs were described using a conditional Gaussian model. For registration, first the high-contrast organs in micro-CT images were registered by fitting the statistical shape model, while the low-contrast organs were subsequently estimated from the high-contrast organs using the conditional Gaussian model. The registration accuracy was validated based on 23 noncontrast-enhanced and 45 contrast-enhanced micro-CT images. Three different accuracy metrics (Dice coefficient, organ volume recovery coefficient, and surface distance) were used for evaluation. The Dice coefficients vary from 0.45 ± 0.18 for the spleen to 0.90 ± 0.02 for the lungs, the volume recovery coefficients vary from for the liver to 1.30 ± 0.75 for the spleen, the surface distances vary from 0.18 ± 0.01 mm for the lungs to 0.72 ± 0.42 mm for the spleen. The registration accuracy of the statistical atlas was compared with two publicly available single-subject mouse atlases, i.e., the MOBY phantom and the DIGIMOUSE atlas, and the results proved that the statistical atlas is more accurate than the single atlases. To evaluate the influence of the training subject size, different numbers of training subjects were used for atlas construction and registration. The results showed an improvement of the registration accuracy when more training subjects were used for the atlas construction. The statistical atlas-based registration was also compared with the thin-plate spline based deformable registration, commonly used in mouse atlas registration. The results revealed that the statistical atlas has the advantage of improving the estimation of low-contrast organs. PMID:21859613
Non-rigid image registration using a statistical spline deformation model.
Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul
2003-07-01
We propose a statistical spline deformation model (SSDM) as a method for non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach makes it possible to reduce the number of degrees of freedom needed for non-rigid registration by retaining only the most significant modes of variation observed in the training set. User-defined transformation components, like affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR-images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% is possible in the number of degrees of freedom without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
NASA Astrophysics Data System (ADS)
Han, Hao; Zhang, Hao; Wei, Xinzhou; Moore, William; Liang, Zhengrong
2016-03-01
In this paper, we propose a low-dose computed tomography (LdCT) image reconstruction method that uses prior knowledge learned from previous high-quality or normal-dose CT (NdCT) scans. The well-established statistical penalized weighted least squares (PWLS) algorithm was adopted for image reconstruction, where the penalty term was formulated by a texture-based Gaussian Markov random field (gMRF) model. The NdCT scan was first segmented into different tissue types by a feature vector quantization (FVQ) approach. Then, for each tissue type, a set of tissue-specific coefficients for the gMRF penalty was statistically learnt from the NdCT image via multiple linear regression analysis. We also propose a scheme to adaptively select the order of the gMRF model for coefficient prediction. The tissue-specific gMRF patterns learnt from the NdCT image were finally used to form an adaptive MRF penalty for the PWLS reconstruction of the LdCT image. The proposed texture-adaptive PWLS image reconstruction algorithm was shown to be more effective at preserving image textures than the conventional PWLS image reconstruction algorithm, and we further demonstrated the gain of high-order MRF modeling for texture-preserving LdCT PWLS image reconstruction.
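A generic PWLS sketch may help fix ideas: minimize a weighted data-fidelity term plus a quadratic neighbourhood penalty by gradient descent. The isotropic first-difference penalty below stands in for the paper's learned, tissue-specific gMRF coefficients, and the tiny dense system is only for illustration.

```python
import numpy as np

def pwls_reconstruct(A, y, w, D, beta, n_iter=300):
    """Minimise 0.5*(Ax - y)' W (Ax - y) + 0.5*beta*||Dx||^2 by gradient descent,
    where W = diag(w) is a statistical weighting and D encodes neighbour differences."""
    x = np.zeros(A.shape[1])
    lip = np.linalg.norm(A, 2) ** 2 * w.max() + beta * np.linalg.norm(D, 2) ** 2
    step = 1.0 / (lip + 1e-12)
    for _ in range(n_iter):
        grad = A.T @ (w * (A @ x - y)) + beta * (D.T @ (D @ x))
        x -= step * grad
    return x

# Tiny stand-in problem (a real CT system matrix would be huge and sparse):
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20)); x_true = rng.normal(size=20)
y = A @ x_true + rng.normal(scale=0.1, size=40)
w = np.full(40, 1.0 / 0.01)                       # inverse-variance weights
D = np.eye(20) - np.roll(np.eye(20), 1, axis=1)   # first-difference operator
x_hat = pwls_reconstruct(A, y, w, D, beta=0.5)
```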
NASA Astrophysics Data System (ADS)
Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin
2016-03-01
How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without maintenance bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression, and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), whereas no significant association was found for either PFS or OS in the group of patients who did not receive maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
Blind image quality assessment without training on human opinion scores
NASA Astrophysics Data System (ADS)
Mittal, Anish; Soundararajan, Rajiv; Muralidhar, Gautam S.; Bovik, Alan C.; Ghosh, Joydeep
2013-03-01
We propose a family of image quality assessment (IQA) models based on natural scene statistics (NSS), that can predict the subjective quality of a distorted image without reference to a corresponding distortionless image, and without any training results on human opinion scores of distorted images. These `completely blind' models compete well with standard non-blind image quality indices in terms of subjective predictive performance when tested on the large publicly available `LIVE' Image Quality database.
Patch-Based Generative Shape Model and MDL Model Selection for Statistical Analysis of Archipelagos
NASA Astrophysics Data System (ADS)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation of calcifications, where the area overlap with the ground truth shapes improved significantly compared to the case where the prior was not used.
A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.
Calapez, Alexandre; Rosa, Agostinho
2010-09-01
Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation either is too difficult or inefficient to be done by hand and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles
NASA Astrophysics Data System (ADS)
Doerschuk, Peter C.; Johnson, John E.
2000-11-01
A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.
McIntosh, Chris; Hamarneh, Ghassan
2012-01-01
We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
NASA Astrophysics Data System (ADS)
Reyes López, Misael; Arámbula Cosío, Fernando
2017-11-01
The cerebellum is an important structure to determine the gestational age of the fetus, moreover most of the abnormalities it presents are related to growth disorders. In this work, we present the results of the segmentation of the fetal cerebellum applying statistical shape and appearance models. Both models were tested on ultrasound images of the fetal brain taken from 23 pregnant women, between 18 and 24 gestational weeks. The accuracy results obtained on 11 ultrasound images show a mean Hausdorff distance of 6.08 mm between the manual segmentation and the segmentation using active shape model, and a mean Hausdorff distance of 7.54 mm between the manual segmentation and the segmentation using active appearance model. The reported results demonstrate that the active shape model is more robust in the segmentation of the fetal cerebellum in ultrasound images.
Pulse pileup statistics for energy discriminating photon counting x-ray detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir
Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N{sub 0}, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20%N{sub 0}, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and how to best take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
Vasconcelos, Maria J M; Ventura, Sandra M R; Freitas, Diamantino R S; Tavares, João Manuel R S
2012-03-01
The morphological and dynamic characterisation of the vocal tract during speech production has been gaining attention, motivated by the latest improvements in magnetic resonance (MR) imaging, namely the use of higher magnetic fields such as 3.0 Tesla. In this work, the automatic study of the vocal tract from 3.0 Tesla MR images was assessed through the application of statistical deformable models. The primary goal was the analysis of the shape of the vocal tract during the articulation of European Portuguese sounds, followed by the evaluation of the automatic segmentation results, i.e. the identification of the vocal tract in new MR images. Regarding speech production, this is the first attempt to automatically characterise and reconstruct the vocal tract shape from 3.0 Tesla MR images using deformable models, particularly active shape and active appearance models. The results clearly evidence the adequacy and advantage of these deformable models for the automatic analysis of 3.0 Tesla MR images, in order to extract the vocal tract shape and assess the involved articulatory movements. Such results are needed, for example, for a better understanding of speech production, particularly in patients suffering from articulatory disorders, and for building enhanced speech synthesizer models.
Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine
NASA Astrophysics Data System (ADS)
Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang
2012-02-01
The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.
Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2015-10-01
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level set based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
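A schematic sketch of the disjunctive-normal construction described above: a union (disjunction) of convex polytopes, each an intersection (conjunction) of half-spaces, with sigmoids as soft half-space indicators. The parameterization and the toy two-square shape are illustrative assumptions, not the paper's exact formulation or learned model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnsm_indicator(points, weights, biases):
    """Soft shape indicator: disjunction of conjunctions of half-spaces.

    points:  (N, d) query coordinates
    weights: (P, H, d) half-space normals for P polytopes of H half-spaces each
    biases:  (P, H) half-space offsets
    Returns values in (0, 1); close to 1 inside the shape, close to 0 outside.
    """
    z = np.einsum('phd,nd->nph', weights, points) + biases   # (N, P, H)
    conj = np.prod(sigmoid(z), axis=2)                       # AND over half-spaces
    disj = 1.0 - np.prod(1.0 - conj, axis=1)                 # OR over polytopes
    return disj

# Hypothetical toy shape: two adjacent unit squares forming a 2 x 1 rectangle.
w = 10.0 * np.array([[[1, 0], [-1, 0], [0, 1], [0, -1]],
                     [[1, 0], [-1, 0], [0, 1], [0, -1]]], dtype=float)
b = np.array([[0.0, 10.0, 0.0, 10.0],       # square [0,1] x [0,1]
              [-10.0, 20.0, 0.0, 10.0]])    # square [1,2] x [0,1]
pts = np.array([[0.5, 0.5], [1.5, 0.5], [3.0, 3.0]])
print(dnsm_indicator(pts, w, b))            # approximately [1, 1, 0]
```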
Perceptual quality prediction on authentically distorted images using a bag of features approach
Ghadiyaram, Deepti; Bovik, Alan C.
2017-01-01
Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417
VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.
Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro
2016-01-01
In healthy individuals, behavioral outcomes are highly associated with the variability on brain regional structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. VoxelStats package has been developed in Matlab(®) and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effect models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to existing methods and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
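VoxelStats itself is a MATLAB package; the following is a language-agnostic sketch in Python of the core idea only (fitting the same linear model independently at every voxel, with an image volume as a covariate), and it is not the package's API. The data, covariates, and t-statistic bookkeeping are hypothetical.

```python
import numpy as np

def voxelwise_glm(response, image_covariate, scalar_covariates):
    """Fit y_v = b0 + b1 * img_v + B * scalars at every voxel v via OLS.

    response:          (S, V) one value per subject and voxel (e.g., atrophy)
    image_covariate:   (S, V) a second modality sampled at the same voxels
    scalar_covariates: (S, K) e.g., age, sex
    Returns the per-voxel coefficient of the image covariate and its t-statistic.
    """
    S, V = response.shape
    betas, tvals = np.empty(V), np.empty(V)
    for v in range(V):
        X = np.column_stack([np.ones(S), image_covariate[:, v], scalar_covariates])
        coef, res, rank, _ = np.linalg.lstsq(X, response[:, v], rcond=None)
        dof = S - X.shape[1]
        sigma2 = res[0] / dof if res.size else 0.0
        cov = sigma2 * np.linalg.inv(X.T @ X)
        betas[v] = coef[1]
        tvals[v] = coef[1] / np.sqrt(cov[1, 1]) if cov[1, 1] > 0 else 0.0
    return betas, tvals

# Hypothetical example: 30 subjects, 1000 voxels, one scalar covariate (age).
rng = np.random.default_rng(1)
img = rng.normal(size=(30, 1000))
age = rng.normal(size=(30, 1))
y = 0.5 * img + 0.2 * age + rng.normal(size=(30, 1000))
b, t = voxelwise_glm(y, img, age)
print(b.mean(), t.mean())
```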
O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin
2015-06-01
Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
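A minimal sketch of the kind of estimation described above: fitting a first-order 2D autoregressive (partial difference) model to a gray-scale image by ordinary least squares, with the estimated coefficients usable as a feature vector for classification. The neighborhood, model order, and synthetic image are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def fit_ar2d(img):
    """OLS fit of u[i,j] ~ a*u[i-1,j] + b*u[i,j-1] + c*u[i-1,j-1] + d."""
    u = img.astype(float)
    y = u[1:, 1:].ravel()
    X = np.column_stack([u[:-1, 1:].ravel(),    # up neighbor
                         u[1:, :-1].ravel(),    # left neighbor
                         u[:-1, :-1].ravel(),   # diagonal neighbor
                         np.ones(y.size)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()

# Hypothetical example on a synthetic textured image.
rng = np.random.default_rng(2)
img = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)
coef, noise_var = fit_ar2d(img)
print(coef, noise_var)
```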
Proper Image Subtraction—Optimal Transient Detection, Photometry, and Hypothesis Testing
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.; Gal-Yam, Avishay
2016-10-01
Transient detection and flux measurement via image subtraction stand at the base of time domain astronomy. Due to the varying seeing conditions, the image subtraction process is non-trivial, and existing solutions suffer from a variety of problems. Starting from basic statistical principles, we develop the optimal statistic for transient detection, flux measurement, and any image-difference hypothesis testing. We derive a closed-form statistic that: (1) is mathematically proven to be the optimal transient detection statistic in the limit of background-dominated noise, (2) is numerically stable, (3) for accurately registered, adequately sampled images, does not leave subtraction or deconvolution artifacts, (4) allows automatic transient detection to the theoretical sensitivity limit by providing credible detection significance, (5) has uncorrelated white noise, (6) is a sufficient statistic for any further statistical test on the difference image, and, in particular, allows us to distinguish particle hits and other image artifacts from real transients, (7) is symmetric to the exchange of the new and reference images, (8) is at least an order of magnitude faster to compute than some popular methods, and (9) is straightforward to implement. Furthermore, we present extensions of this method that make it resilient to registration errors, color-refraction errors, and any noise source that can be modeled. In addition, we show that the optimal way to prepare a reference image is the proper image coaddition presented in Zackay & Ofek. We demonstrate this method on simulated data and real observations from the PTF data release 2. We provide an implementation of this algorithm in MATLAB and Python.
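A schematic Fourier-domain sketch of the closed-form difference image described above, assuming unit flux zero-points, white background-dominated noise, and perfectly registered inputs; the full method also includes flux matching, noise corrections, and the matched-filter detection statistic obtained by cross-correlating the difference image with its own PSF, none of which are reproduced here. This is an illustration of the structure of the statistic, not a production implementation.

```python
import numpy as np

def proper_subtraction(new, ref, psf_new, psf_ref, sigma_new, sigma_ref):
    """Schematic proper image subtraction (background-dominated, unit zero-points).

    new, ref:        registered new and reference images (same shape)
    psf_new/psf_ref: PSFs, same shape as the images, centered and normalized to sum 1
    sigma_*:         background noise standard deviations
    Returns a difference image with approximately white, uncorrelated noise.
    """
    N = np.fft.fft2(new)
    R = np.fft.fft2(ref)
    Pn = np.fft.fft2(np.fft.ifftshift(psf_new))
    Pr = np.fft.fft2(np.fft.ifftshift(psf_ref))
    denom = np.sqrt(sigma_new**2 * np.abs(Pr)**2 + sigma_ref**2 * np.abs(Pn)**2)
    D_hat = (Pr * N - Pn * R) / np.maximum(denom, 1e-12)
    return np.real(np.fft.ifft2(D_hat))
```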
Choi, Se Y; Ahn, Seung H; Choi, Jae D; Kim, Jung H; Lee, Byoung-Il; Kim, Jeong-In
2016-01-01
Objective: The purpose of this study was to compare CT image quality for evaluating urolithiasis using filtered back projection (FBP), statistical iterative reconstruction (IR) and knowledge-based iterative model reconstruction (IMR) according to various scan parameters and radiation doses. Methods: A 5 × 5 × 5 mm3 uric acid stone was placed in a physical human phantom at the level of the pelvis. 3 tube voltages (120, 100 and 80 kV) and 4 current–time products (100, 70, 30 and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with FBP, statistical IR (Levels 5–7) and knowledge-based IMR (soft-tissue Levels 1–3). The radiation dose, objective image quality and signal-to-noise ratio (SNR) were evaluated, and subjective assessments were performed. Results: The effective doses ranged from 0.095 to 2.621 mSv. Knowledge-based IMR showed better objective image noise and SNR than did FBP and statistical IR. The subjective image noise of FBP was worse than that of statistical IR and knowledge-based IMR. The subjective assessment scores deteriorated after a break point of 100 kV and 30 mAs. Conclusion: At the setting of 100 kV and 30 mAs, the radiation dose can be decreased by approximately 84% while keeping the subjective image assessment. Advances in knowledge: Patients with urolithiasis can be evaluated with ultralow-dose non-enhanced CT using a knowledge-based IMR algorithm at a substantially reduced radiation dose with the imaging quality preserved, thereby minimizing the risks of radiation exposure while providing clinically relevant diagnostic benefits for patients. PMID:26577542
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
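A minimal sketch of the multiscale structure such models exploit, assuming a standard coarse-to-fine count pyramid (not the authors' full MHMM): for Poisson counts, summing 2x2 blocks gives the coarser scale, and given a parent count the child counts are multinomially (pairwise binomially) distributed, which is the inter-scale dependency a multiscale hidden Markov model captures.

```python
import numpy as np

def poisson_count_pyramid(counts, n_levels=4):
    """Coarse-to-fine pyramid of photon counts obtained by 2x2 summation.
    Assumes image dimensions divisible by 2 at every level."""
    levels = [np.asarray(counts)]
    for _ in range(n_levels - 1):
        c = levels[-1]
        levels.append(c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2])
    return levels   # [finest, ..., coarsest]

# Hypothetical example: a flat intensity field observed with Poisson noise.
rng = np.random.default_rng(3)
photons = rng.poisson(5.0 * np.ones((64, 64)))
for lvl, c in enumerate(poisson_count_pyramid(photons)):
    print(lvl, c.shape, c.sum())    # total count is preserved across scales
```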
SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY
Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang
2010-01-01
This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application. PMID:21197416
Cooper, Emily A.; Norcia, Anthony M.
2015-01-01
The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics
Coen-Cagli, Ruben; Dayan, Peter; Schwartz, Odelia
2012-01-01
Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lends statistical support to the theory that V1 computes visual salience. PMID:22396635
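A minimal sketch of standard divisive normalization, the computation the abstract says its model generalizes; the weights and semisaturation constant are fixed illustrative values, whereas in the paper's flexible model the surround influence depends on the inferred statistical dependence between center and surround.

```python
import numpy as np

def divisive_normalization(drive_center, drive_surround, weights, sigma=1.0):
    """Normalized response r = c / sqrt(sigma^2 + sum_k w_k * s_k^2)."""
    pool = sigma**2 + np.sum(weights * np.asarray(drive_surround)**2)
    return drive_center / np.sqrt(pool)

# Hypothetical example: suppression grows with surround stimulus energy.
print(divisive_normalization(2.0, [0.5, 0.5], np.array([1.0, 1.0])))
print(divisive_normalization(2.0, [2.0, 2.0], np.array([1.0, 1.0])))
```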
Training models of anatomic shape variability
Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang
2008-01-01
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, nonself-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same patient male pelvis and head and neck images, and cross patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT∕ART. PMID:18777919
NASA Astrophysics Data System (ADS)
Botter Martins, Samuel; Vallin Spina, Thiago; Yasuda, Clarissa; Falcão, Alexandre X.
2017-02-01
Statistical Atlases have played an important role towards automated medical image segmentation. However, a challenge has been to make the atlas more adaptable to possible errors in deformable registration of anomalous images, given that the body structures of interest for segmentation might present significant differences in shape and texture. Recently, deformable registration errors have been accounted by a method that locally translates the statistical atlas over the test image, after registration, and evaluates candidate objects from a delineation algorithm in order to choose the best one as final segmentation. In this paper, we improve its delineation algorithm and extend the model to be a multi-object statistical atlas, built from control images and adaptable to anomalous images, by incorporating a texture classifier. In order to provide a first proof of concept, we instantiate the new method for segmenting, object-by-object and all objects simultaneously, the left and right brain hemispheres, and the cerebellum, without the brainstem, and evaluate it on MRT1-images of epilepsy patients before and after brain surgery, which removed portions of the temporal lobe. The results show efficiency gain with statistically significant higher accuracy, using the mean Average Symmetric Surface Distance, with respect to the original approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of a known ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
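A minimal sketch of the PCA-based step described above: learning a point-distribution model of organ contours from training cases and sampling statistically plausible new organs by weighting the principal components. The GAD model details (centroid statistics, attribute handling, tumor modeling) are not reproduced, and the training ellipses are hypothetical stand-ins for delineated contours.

```python
import numpy as np

def train_pca_shape_model(shapes):
    """shapes: (N, 2M) aligned training contours, each flattened as M (x, y) points."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    eigvals = s**2 / (shapes.shape[0] - 1)    # variance along each mode
    return mean, Vt, eigvals

def sample_shape(mean, modes, eigvals, n_modes=5, rng=None):
    """Draw a new shape by sampling mode weights ~ N(0, eigenvalue)."""
    rng = rng or np.random.default_rng()
    b = rng.normal(0.0, np.sqrt(eigvals[:n_modes]))
    return mean + b @ modes[:n_modes]

# Hypothetical example: 20 noisy ellipses standing in for prostate contours.
rng = np.random.default_rng(4)
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
training = np.stack([np.concatenate([(1.0 + 0.1 * rng.normal()) * np.cos(t),
                                     (0.7 + 0.1 * rng.normal()) * np.sin(t)])
                     for _ in range(20)])
mean, modes, eigvals = train_pca_shape_model(training)
new_contour = sample_shape(mean, modes, eigvals, n_modes=3, rng=rng)
print(new_contour.shape)
```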
Statistical model for speckle pattern optimization.
Su, Yong; Zhang, Qingchuan; Gao, Zeren
2017-11-27
Image registration is the key technique of optical metrologies such as digital image correlation (DIC), particle image velocimetry (PIV), and speckle metrology. Its performance depends critically on the quality of image pattern, and thus pattern optimization attracts extensive attention. In this article, a statistical model is built to optimize speckle patterns that are composed of randomly positioned speckles. It is found that the process of speckle pattern generation is essentially a filtered Poisson process. The dependence of measurement errors (including systematic errors, random errors, and overall errors) upon speckle pattern generation parameters is characterized analytically. By minimizing the errors, formulas of the optimal speckle radius are presented. Although the primary motivation is from the field of DIC, we believed that scholars in other optical measurement communities, such as PIV and speckle metrology, will benefit from these discussions.
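The abstract observes that speckle pattern generation is essentially a filtered Poisson process. The following is a minimal sketch of generating such a pattern (a Poisson number of randomly positioned Gaussian speckles of a given radius); the density and radius are illustrative, not the paper's derived optimum.

```python
import numpy as np

def make_speckle_pattern(size=256, density=0.02, radius=3.0, rng=None):
    """Filtered Poisson process: random speckle centers convolved with a Gaussian.

    density: expected number of speckles per pixel
    radius:  speckle radius (std. dev. of the Gaussian kernel, in pixels)
    """
    rng = rng or np.random.default_rng()
    n_speckles = rng.poisson(density * size * size)
    centers = rng.uniform(0, size, size=(n_speckles, 2))
    y, x = np.indices((size, size), dtype=float)
    img = np.zeros((size, size))
    for cy, cx in centers:
        img += np.exp(-((y - cy)**2 + (x - cx)**2) / (2 * radius**2))
    return np.clip(img, 0, 1)

pattern = make_speckle_pattern(size=128, density=0.01, radius=2.5,
                               rng=np.random.default_rng(5))
print(pattern.mean())
```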
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apte, A; Veeraraghavan, H; Oh, J
Purpose: To present an open source and free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various image modalities such as CT, PET, MR, SPECT, and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features such as first-order statistics, gray-scale co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are performed with Matlab for ease of development and readability of code and features for a wide audience. Open-source software developed with other programming languages is integrated to enhance various components of this toolbox, for example Java-based DCM4CHE for DICOM import and R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source, GNU copyrighted software. The toolbox was prototyped by modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
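The toolbox itself is MATLAB-based; as a language-agnostic illustration of one feature class it mentions, the sketch below computes a gray-level co-occurrence matrix for a single offset and two classic texture features in plain NumPy. The quantization level, offset, and region of interest are illustrative assumptions, not the toolbox's implementation.

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one offset, plus contrast and energy."""
    q = np.floor(img.astype(float) / img.max() * (levels - 1e-9)).astype(int)
    dy, dx = offset
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)   # accumulate co-occurrence counts
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return {"contrast": np.sum(p * (i - j) ** 2), "energy": np.sum(p ** 2)}

# Hypothetical example: a random patch standing in for a delineated ROI.
rng = np.random.default_rng(6)
roi = rng.random((64, 64))
print(glcm_features(roi))
```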
Widmann, Gerlig; Schullian, Peter; Gassner, Eva-Maria; Hoermann, Romed; Bale, Reto; Puelacher, Wolfgang
2015-03-01
OBJECTIVE. The purpose of this article is to evaluate 2D and 3D image quality of high-resolution ultralow-dose CT images of the craniofacial bone for navigated surgery using adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) in comparison with standard filtered backprojection (FBP). MATERIALS AND METHODS. A formalin-fixed human cadaver head was scanned using a clinical reference protocol at a CT dose index volume of 30.48 mGy and a series of five ultralow-dose protocols at 3.48, 2.19, 0.82, 0.44, and 0.22 mGy using FBP and ASIR at 50% (ASIR-50), ASIR at 100% (ASIR-100), and MBIR. Blinded 2D axial and 3D volume-rendered images were compared with each other by three readers using top-down scoring. Scores were analyzed per protocol or dose and reconstruction. All images were compared with the FBP reference at 30.48 mGy. A nonparametric Mann-Whitney U test was used. Statistical significance was set at p < 0.05. RESULTS. For 2D images, the FBP reference at 30.48 mGy did not statistically significantly differ from ASIR-100 at 3.48 mGy, ASIR-100 at 2.19 mGy, and MBIR at 0.82 mGy. MBIR at 2.19 and 3.48 mGy scored statistically significantly better than the FBP reference (p = 0.032 and 0.001, respectively). For 3D images, the FBP reference at 30.48 mGy did not statistically significantly differ from all reconstructions at 3.48 mGy; FBP and ASIR-100 at 2.19 mGy; FBP, ASIR-100, and MBIR at 0.82 mGy; MBIR at 0.44 mGy; and MBIR at 0.22 mGy. CONCLUSION. MBIR (2D and 3D) and ASIR-100 (2D) may significantly improve subjective image quality of ultralow-dose images and may allow more than 90% dose reductions.
NASA Astrophysics Data System (ADS)
Jahani, Nariman; Cohen, Eric; Hsieh, Meng-Kang; Weinstein, Susan P.; Pantalone, Lauren; Davatzikos, Christos; Kontos, Despina
2018-02-01
We examined the ability of DCE-MRI longitudinal features to give early prediction of recurrence-free survival (RFS) in women undergoing neoadjuvant chemotherapy for breast cancer, in a retrospective analysis of 106 women from the ISPY 1 cohort. These features were based on the voxel-wise changes seen in registered images taken before treatment and after the first round of chemotherapy. We computed the transformation field using a robust deformable image registration technique to match breast images from these two visits. Using the deformation field, parametric response maps (PRM) — a voxel-based feature analysis of longitudinal changes in images between visits — was computed for maps of four kinetic features (signal enhancement ratio, peak enhancement, and wash-in/wash-out slopes). A two-level discrete wavelet transform was applied to these PRMs to extract heterogeneity information about tumor change between visits. To estimate survival, a Cox proportional hazard model was applied with the C statistic as the measure of success in predicting RFS. The best PRM feature (as determined by C statistic in univariable analysis) was determined for each of the four kinetic features. The baseline model, incorporating functional tumor volume, age, race, and hormone response status, had a C statistic of 0.70 in predicting RFS. The model augmented with the four PRM features had a C statistic of 0.76. Thus, our results suggest that adding information on the texture of voxel-level changes in tumor kinetic response between registered images of first and second visits could improve early RFS prediction in breast cancer after neoadjuvant chemotherapy.
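A minimal sketch of a parametric response map built from two registered kinetic-feature maps: each tumor voxel is labeled as increased, decreased, or stable relative to a threshold, and the class fractions serve as simple PRM features. The threshold, the synthetic data, and the omission of the wavelet-based texture step are assumptions for illustration only.

```python
import numpy as np

def parametric_response_map(feature_visit1, feature_visit2, mask, threshold):
    """Voxel-wise change labels (+1 increased, -1 decreased, 0 stable) in the tumor mask."""
    delta = feature_visit2 - feature_visit1
    labels = np.zeros(delta.shape, dtype=int)
    labels[mask & (delta > threshold)] = 1
    labels[mask & (delta < -threshold)] = -1
    n = int(mask.sum())
    fractions = {c: int(np.sum(labels[mask] == c)) / n for c in (-1, 0, 1)}
    return labels, fractions

# Hypothetical example with a synthetic peak-enhancement map over two visits.
rng = np.random.default_rng(7)
v1 = rng.normal(1.0, 0.2, size=(64, 64, 16))
v2 = v1 + rng.normal(-0.05, 0.2, size=v1.shape)   # mild average wash-out
mask = np.zeros(v1.shape, dtype=bool)
mask[20:40, 20:40, 4:12] = True
_, frac = parametric_response_map(v1, v2, mask, threshold=0.1)
print(frac)
```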
Model-based error diffusion for high fidelity lenticular screening.
Lau, Daniel; Smith, Trebor
2006-04-17
Digital halftoning is the process of converting a continuous-tone image into an arrangement of black and white dots for binary display devices such as digital ink-jet and electrophotographic printers. As printers are achieving print resolutions exceeding 1,200 dots per inch, it is becoming increasingly important for halftoning algorithms to consider the variations and interactions in the size and shape of printed dots between neighboring pixels. In the case of lenticular screening where statistically independent images are spatially multiplexed together, ignoring these variations and interactions, such as dot overlap, will result in poor lenticular image quality. To this end, we describe our use of model-based error-diffusion for the lenticular screening problem where statistical independence between component images is achieved by restricting the diffusion of error to only those pixels of the same component image where, in order to avoid instabilities, the proposed approach involves a novel error-clipping procedure.
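A minimal sketch of the key constraint described above: Floyd-Steinberg-style error diffusion in which the quantization error is pushed only to pixels belonging to the same interlaced component image (here, columns with the same index modulo the number of components). The dot-overlap printer model and the error-clipping procedure of the paper are not included.

```python
import numpy as np

def lenticular_error_diffusion(img, n_components=2):
    """Binary halftone of a column-interlaced lenticular image.

    Error from a pixel in component k (column % n_components == k) is diffused
    only to neighbors of the same component, preserving statistical independence
    between the interlaced images.
    """
    f = img.astype(float).copy()
    out = np.zeros_like(f)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new
            out[y, x] = new
            # Floyd-Steinberg weights applied to same-component neighbors.
            if x + n_components < w:
                f[y, x + n_components] += err * 7 / 16
            if y + 1 < h:
                if x - n_components >= 0:
                    f[y + 1, x - n_components] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + n_components < w:
                    f[y + 1, x + n_components] += err * 1 / 16
    return out

# Hypothetical example: two interleaved component images (ramp and flat gray).
img = np.tile(np.linspace(0, 1, 64), (64, 1))   # component 0: horizontal ramp
img[:, 1::2] = 0.25                              # component 1: flat mid-gray
halftone = lenticular_error_diffusion(img, n_components=2)
print(halftone[:, 0::2].mean(), halftone[:, 1::2].mean())   # ~0.5 and ~0.25
```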
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. Especially we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper / lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge on both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed as fitting of this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In the experimental result by using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777, 0.939 mm for cervical, upper and lower thoracic, lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
Building generic anatomical models using virtual model cutting and iterative registration.
Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W
2010-02-08
Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volume by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and surface model for the generic 3D model are created at the final step. Our method is very flexible and easy to use such that anyone can use image stacks to create models and retrieve a sub-region from it at their ease. Java-based implementation allows our method to be used on various visualization systems including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of their interest quickly and accurately.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods make it possible to generate random fields that reproduce highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistical techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists of borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thus lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level and then each subsequent level, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at every scale and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. These kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
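A minimal sketch of the multi-resolution decomposition step described above (repeated Gaussian smoothing and downsampling of the training image), on top of which the direct-sampling simulation would be run level by level; the MPS simulation itself is not shown, and the binary training image is a hypothetical stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(training_image, n_levels=4, sigma=1.0):
    """Return [finest, ..., coarsest] versions of the training image."""
    levels = [np.asarray(training_image, dtype=float)]
    for _ in range(n_levels - 1):
        smoothed = gaussian_filter(levels[-1], sigma)
        levels.append(smoothed[::2, ::2])    # decimate by 2 after smoothing
    return levels

# Hypothetical example: a binary, channel-like training image.
rng = np.random.default_rng(8)
ti = (gaussian_filter(rng.normal(size=(256, 256)), 8) > 0).astype(float)
for level, img in enumerate(gaussian_pyramid(ti, n_levels=4)):
    print(level, img.shape)
```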
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metoyer, Candace N.; Walsh, Stephen J.; Tardiff, Mark F.
2008-10-30
The detection and identification of weak gaseous plumes using thermal imaging data is complicated by many factors. These include variability due to atmosphere, ground and plume temperature, and background clutter. This paper presents an analysis of one formulation of the physics-based model that describes the at-sensor observed radiance. The motivating question for the analyses performed in this paper is as follows. Given a set of backgrounds, is there a way to predict the background over which the probability of detecting a given chemical will be the highest? Two statistics were developed to address this question. These statistics incorporate data from the long-wave infrared band to predict the background over which chemical detectability will be the highest. These statistics can be computed prior to data collection. As a preliminary exploration into the predictive ability of these statistics, analyses were performed on synthetic hyperspectral images. Each image contained one chemical (either carbon tetrachloride or ammonia) spread across six distinct background types. The statistics were used to generate predictions for the background ranks. Then, the predicted ranks were compared to the empirical ranks obtained from the analyses of the synthetic images. For the simplified images under consideration, the predicted and empirical ranks showed a promising amount of agreement. One statistic accurately predicted the best and worst background for detection in all of the images. Future work may include explorations of more complicated plume ingredients, background types, and noise structures.
Fonseca, Carissa G; Backhaus, Michael; Bluemke, David A; Britten, Randall D; Chung, Jae Do; Cowan, Brett R; Dinov, Ivo D; Finn, J Paul; Hunter, Peter J; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Medrano-Gracia, Pau; Shivkumar, Kalyanam; Suinesiaputra, Avan; Tao, Wenchao; Young, Alistair A
2011-08-15
Integrative mathematical and statistical models of cardiac anatomy and physiology can play a vital role in understanding cardiac disease phenotype and planning therapeutic strategies. However, the accuracy and predictive power of such models is dependent upon the breadth and depth of noninvasive imaging datasets. The Cardiac Atlas Project (CAP) has established a large-scale database of cardiac imaging examinations and associated clinical data in order to develop a shareable, web-accessible, structural and functional atlas of the normal and pathological heart for clinical, research and educational purposes. A goal of CAP is to facilitate collaborative statistical analysis of regional heart shape and wall motion and characterize cardiac function among and within population groups. Three main open-source software components were developed: (i) a database with web-interface; (ii) a modeling client for 3D + time visualization and parametric description of shape and motion; and (iii) open data formats for semantic characterization of models and annotations. The database was implemented using a three-tier architecture utilizing MySQL, JBoss and Dcm4chee, in compliance with the DICOM standard to provide compatibility with existing clinical networks and devices. Parts of Dcm4chee were extended to access image specific attributes as search parameters. To date, approximately 3000 de-identified cardiac imaging examinations are available in the database. All software components developed by the CAP are open source and are freely available under the Mozilla Public License Version 1.1 (http://www.mozilla.org/MPL/MPL-1.1.txt). http://www.cardiacatlas.org a.young@auckland.ac.nz Supplementary data are available at Bioinformatics online.
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
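A minimal sketch of the moment-based local parameter estimation described above: the Gamma scale parameter, sigma squared divided by the mean, computed in a sliding window over the envelope image to produce a parametric map that differentiates blood from myocardium. The square window and the synthetic Gamma-distributed envelopes are illustrative assumptions; the paper matches the window to the local speckle size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_gamma_scale(envelope, window=9):
    """Moment-based estimate of the Gamma scale parameter (var / mean) per pixel."""
    env = envelope.astype(float)
    mean = uniform_filter(env, window)
    mean_sq = uniform_filter(env**2, window)
    var = np.maximum(mean_sq - mean**2, 0.0)
    return var / np.maximum(mean, 1e-12)

# Hypothetical example: weakly scattering blood next to strongly scattering muscle.
rng = np.random.default_rng(9)
blood = rng.gamma(shape=2.0, scale=2.0, size=(64, 64))
muscle = rng.gamma(shape=2.0, scale=8.0, size=(64, 64))
img = np.hstack([blood, muscle])
scale_map = local_gamma_scale(img, window=9)
print(scale_map[:, :64].mean(), scale_map[:, 64:].mean())   # roughly 2 vs. 8
```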
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key to achieving high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.
Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz
2011-02-01
Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of applications. We present two examples of possible applications in radiation therapy and image guided diagnosis.
Markov Random Fields, Stochastic Quantization and Image Analysis
1990-01-01
Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the ... of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.
Multiresolution multiscale active mask segmentation of fluorescence microscope images
NASA Astrophysics Data System (ADS)
Srinivasa, Gowri; Fickus, Matthew; Kovačević, Jelena
2009-08-01
We propose an active mask segmentation framework that combines the advantages of statistical modeling, smoothing, speed and flexibility offered by the traditional methods of region-growing, multiscale, multiresolution and active contours respectively. At the crux of this framework is a paradigm shift from evolving contours in the continuous domain to evolving multiple masks in the discrete domain. Thus, the active mask framework is particularly suited to segment digital images. We demonstrate the use of the framework in practice through the segmentation of punctate patterns in fluorescence microscope images. Experiments reveal that statistical modeling helps the multiple masks converge from a random initial configuration to a meaningful one. This obviates the need for an involved initialization procedure germane to most of the traditional methods used to segment fluorescence microscope images. While we provide the mathematical details of the functions used to segment fluorescence microscope images, this is only an instantiation of the active mask framework. We suggest some other instantiations of the framework to segment different types of images.
Statistical iterative reconstruction to improve image quality for digital breast tomosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Shiyu, E-mail: shiyu.xu@gmail.com; Chen, Ying, E-mail: adachen@siu.edu; Lu, Jianping
2015-09-15
Purpose: Digital breast tomosynthesis (DBT) is a novel modality with the potential to improve early detection of breast cancer by providing three-dimensional (3D) imaging with a low radiation dose. 3D image reconstruction presents some challenges: cone-beam and flat-panel geometry, and highly incomplete sampling. A promising means to overcome these challenges is statistical iterative reconstruction (IR), since it provides the flexibility of accurate physics modeling and a general description of system geometry. The authors’ goal was to develop techniques for applying statistical IR to tomosynthesis imaging data. Methods: These techniques include the following: a physics model with a local voxel-pair based prior with flexible parameters to fine-tune image quality; a precomputed parameter λ in the prior, to remove data dependence and to achieve a uniform resolution property; an effective ray-driven technique to compute the forward and backprojection; and an oversampled, ray-driven method to perform high resolution reconstruction with a practical region-of-interest technique. To assess the performance of these techniques, the authors acquired phantom data on the stationary DBT prototype system. To solve the estimation problem, the authors proposed an optimization-transfer based algorithm framework that potentially allows fewer iterations to achieve an acceptably converged reconstruction. Results: IR improved the detectability of low-contrast and small microcalcifications, reduced cross-plane artifacts, improved spatial resolution, and lowered noise in reconstructed images. Conclusions: Although the computational load remains a significant challenge for practical development, the superior image quality provided by statistical IR, combined with advancing computational techniques, may bring benefits to screening, diagnostics, and intraoperative imaging in clinical applications.
Statistical estimation of femur micro-architecture using optimal shape and density predictors.
Lekadir, Karim; Hazrati-Marangalou, Javad; Hoogendoorn, Corné; Taylor, Zeike; van Rietbergen, Bert; Frangi, Alejandro F
2015-02-26
The personalization of trabecular micro-architecture has been recently shown to be important in patient-specific biomechanical models of the femur. However, high-resolution in vivo imaging of bone micro-architecture using existing modalities is still infeasible in practice due to the associated acquisition times, costs, and X-ray radiation exposure. In this study, we describe a statistical approach for the prediction of the femur micro-architecture based on the more easily extracted subject-specific bone shape and mineral density information. To this end, a training sample of ex vivo micro-CT images is used to learn the existing statistical relationships within the low and high resolution image data. More specifically, optimal bone shape and mineral density features are selected based on their predictive power and used within a partial least square regression model to estimate the unknown trabecular micro-architecture within the anatomical models of new subjects. The experimental results demonstrate the accuracy of the proposed approach, with average errors of 0.07 for both the degree of anisotropy and tensor norms. Copyright © 2015 Elsevier Ltd. All rights reserved.
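A minimal sketch of the regression step described above: partial least squares mapping shape and density predictors to micro-architecture descriptors, here via scikit-learn's PLSRegression. The feature dimensions, the synthetic data, and the omission of the optimal-predictor selection are assumptions for illustration only.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(10)
n_train, n_shape, n_density, n_micro = 40, 30, 20, 6

# Hypothetical training data: shape + density predictors and the
# micro-architecture descriptors (e.g., degree of anisotropy, tensor norms).
X = rng.normal(size=(n_train, n_shape + n_density))
true_w = rng.normal(size=(X.shape[1], n_micro))
Y = X @ true_w + 0.1 * rng.normal(size=(n_train, n_micro))

pls = PLSRegression(n_components=5)
pls.fit(X, Y)

# Predict micro-architecture for a new subject from shape/density alone.
x_new = rng.normal(size=(1, X.shape[1]))
print(pls.predict(x_new).shape)
```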
Robust estimation approach for blind denoising.
Rabie, Tamer
2005-11-01
This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.
High-order statistics of Weber local descriptors for image representation.
Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang
2015-06-01
Highly discriminant visual features play a key role in different image classification applications. This study aims to realize a method for extracting highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on its relative variance. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to model the Weber local descriptors with a parametric probability process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy adaptively characterizes the WLD space using a generative probability model and learns parameters that better fit the training space, leading to a more discriminant representation of images. To validate the efficiency of the proposed strategy, we apply it to three different image classification applications, including texture, food image, and HEp-2 cell pattern recognition; the results show that our strategy has advantages over state-of-the-art approaches.
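A minimal sketch of the Weber-law differential excitation underlying such descriptors is given below (arctangent of the summed neighbour differences normalized by the centre intensity); the parametric probability modeling and higher-order statistics described above are not reproduced, and the window choice is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def differential_excitation(image, eps=1e-6):
    """Weber-law differential excitation over a 3x3 neighbourhood:
    arctan( sum(neighbour - centre) / centre )."""
    img = image.astype(float)
    kernel = np.array([[1, 1, 1],
                       [1, -8, 1],
                       [1, 1, 1]], dtype=float)          # sums (neighbour - centre)
    diff_sum = convolve(img, kernel, mode='reflect')
    return np.arctan(diff_sum / (img + eps))

# toy usage
patch = np.random.default_rng(1).integers(0, 256, size=(16, 16))
xi = differential_excitation(patch)
print(xi.min(), xi.max())   # values bounded in (-pi/2, pi/2)
```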
Modified-BRISQUE as no reference image quality assessment for structural MR images.
Chow, Li Sze; Rajagopal, Heshalini
2017-11-01
An effective and practical Image Quality Assessment (IQA) model is needed to assess the image quality produced by any new hardware or software in MRI. A highly competitive No-Reference IQA (NR-IQA) model called the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE), initially designed for natural images, was modified to evaluate structural MR images. The BRISQUE model measures image quality using locally normalized luminance coefficients, from which the image features are calculated. The modified-BRISQUE model trained a new regression model using MR image features and Difference Mean Opinion Scores (DMOS) from 775 MR images. Two types of benchmarks, objective and subjective assessments, were used as performance evaluators for both the original and modified BRISQUE models. The modified-BRISQUE correlated highly with both benchmarks, and its correlations were higher than those of the original BRISQUE, a significant percentage improvement. The modified-BRISQUE was statistically better than the original BRISQUE. The modified-BRISQUE model can accurately measure the image quality of MR images and is a practical NR-IQA model for MR images that does not require reference images. Copyright © 2017 Elsevier Inc. All rights reserved.
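The locally normalized luminance (MSCN) coefficients that BRISQUE-style features start from can be computed as in the sketch below; the Gaussian window width and stabilising constant are illustrative, and the regression step of the modified model is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalised (MSCN) coefficients used as the
    starting point for BRISQUE-style quality features."""
    img = image.astype(float)
    mu = gaussian_filter(img, sigma)                                   # local mean
    sigma_map = np.sqrt(np.abs(gaussian_filter(img * img, sigma) - mu * mu))  # local std
    return (img - mu) / (sigma_map + c)

# Generalised-Gaussian-type statistics of the MSCN map (variance, kurtosis, ...)
# would then form part of the feature vector fed to the regression model.
img = np.random.default_rng(2).random((64, 64))
print(mscn_coefficients(img).std())
```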
Sitek, Arkadiusz
2016-12-21
The origin ensemble (OE) algorithm is a new method used for image reconstruction from nuclear tomographic data. The main advantage of this algorithm is the ease of implementation for complex tomographic models and the sound statistical theory. In this comment, the author provides the basics of the statistical interpretation of OE and gives suggestions for the improvement of the algorithm in the application to prompt gamma imaging as described in Polf et al (2015 Phys. Med. Biol. 60 7085).
Spatiotemporal models for the simulation of infrared backgrounds
NASA Astrophysics Data System (ADS)
Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang
1992-09-01
It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources; consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: given a real infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first- and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first-order statistics of the original image onto the synthetic one; thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds required to produce a sufficiently accurate mathematical model for synthesizing a similar IR background at different times of the day. Original and synthetic IR backgrounds will be presented. Previous research in simulating IR backgrounds was performed by Strenzwilk et al., Botkin et al., and Rapp. The most recent work of Strenzwilk et al. was based on the use of one-dimensional ARMA models for synthesizing the images. Their results were able to retain the global statistical and spectral behavior of the original image, but the synthetic image was not visually very similar to the original. The research presented in this paper is the result of an attempt to improve upon their results, and represents a significant improvement in quality over previously obtained results.
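A minimal sketch of the described model structure, an excitation field convolved with an FIR kernel followed by histogram modification, is shown below; the kernel and excitation here are random placeholders rather than the signal-enhancement-derived quantities of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def histogram_match(synthetic, reference):
    """Memoryless nonlinearity: impose the first-order statistics (histogram)
    of the reference image onto the synthetic one."""
    order = np.argsort(synthetic, axis=None)
    matched = np.empty_like(synthetic).ravel()
    matched[order] = np.sort(reference, axis=None)
    return matched.reshape(synthetic.shape)

def synthesize_background(excitation, fir_kernel, reference):
    """Linear system (FIR convolution) cascaded with histogram modification."""
    linear_out = fftconvolve(excitation, fir_kernel, mode='same')
    return histogram_match(linear_out, reference)

rng = np.random.default_rng(0)
reference = rng.normal(10, 2, size=(128, 128))       # stand-in for a real IR background
excitation = rng.normal(size=(128, 128))              # placeholder excitation field
fir_kernel = np.outer(np.hanning(9), np.hanning(9))   # placeholder smoothing kernel
synthetic = synthesize_background(excitation, fir_kernel, reference)
```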
Shahabi, Himan; Hashim, Mazlan
2015-04-22
This research presents GIS-based statistical models for landslide susceptibility mapping of the Cameron Highlands area in Malaysia using geographic information system (GIS) and remote-sensing data. Ten factors, including slope, aspect, soil, lithology, NDVI, land cover, distance to drainage, precipitation, distance to fault, and distance to road, were extracted from SAR data, SPOT 5, and WorldView-1 images. The relationships between the detected landslide locations and these ten factors were identified using GIS-based statistical models including analytical hierarchy process (AHP), weighted linear combination (WLC), and spatial multi-criteria evaluation (SMCE) models. The landslide inventory map, which contains a total of 92 landslide locations, was created from numerous resources such as digital aerial photographs, AIRSAR data, WorldView-1 images, and field surveys. Then, 80% of the landslide inventory was used for training the statistical models and the remaining 20% for validation. The validation results using the relative landslide density index (R-index) and receiver operating characteristic (ROC) demonstrated that the SMCE model (accuracy 96%) predicts better than the AHP (91%) and WLC (89%) models. These landslide susceptibility maps would be useful for hazard mitigation and regional planning.
Collagen morphology and texture analysis: from statistics to classification
Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.
2013-01-01
In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage. PMID:23846580
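The kind of FOS and GLCM feature vector described can be assembled with scikit-image as sketched below; the distances, angles, and chosen properties are illustrative, not the study's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(image_u8):
    """First-order statistics plus GLCM properties for one (8-bit) SHG image."""
    mean, std = image_u8.mean(), image_u8.std()
    skewness = ((image_u8 - mean) ** 3).mean() / (std ** 3 + 1e-9)
    fos = [mean, std, skewness]
    glcm = graycomatrix(image_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    sos = [graycoprops(glcm, p).mean() for p in
           ('contrast', 'correlation', 'energy', 'homogeneity')]
    return np.array(fos + sos)

img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(texture_features(img))   # feature vector fed to a multi-group classifier
```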
NASA Astrophysics Data System (ADS)
Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.
2014-08-01
This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarized structure of autofluorescence images of biological tissue layers was detected and investigated. A model is proposed that describes the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers. On this basis, the method of laser autofluorescence polarimetry is justified analytically and tested experimentally. The effectiveness of this method in the postmortem diagnosis of infarction is analyzed. Objective criteria (statistical moments) for differentiating autofluorescence images of histological sections of the myocardium were defined. The operational characteristics (sensitivity, specificity, accuracy) of this technique were determined.
Scharfenberger, Christian; Wong, Alexander; Clausi, David A
2015-01-01
We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.
Penn, Richard; Werner, Michael; Thomas, Justin
2015-01-01
Background Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. Methods In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. Results We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Conclusions Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible. PMID:26029638
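A minimal sketch of fitting a first-order causal 2D autoregressive (partial difference) model to an image by OLS is given below; the neighbourhood and model order are illustrative, not the paper's specification, and the nonparametric significance test is omitted.

```python
import numpy as np

def fit_2d_ar(image):
    """OLS fit of a first-order causal 2-D autoregressive model:
    I[i,j] ~ a*I[i-1,j] + b*I[i,j-1] + c*I[i-1,j-1] + d."""
    img = image.astype(float)
    y = img[1:, 1:].ravel()
    X = np.column_stack([
        img[:-1, 1:].ravel(),    # vertical neighbour
        img[1:, :-1].ravel(),    # horizontal neighbour
        img[:-1, :-1].ravel(),   # diagonal neighbour
        np.ones(y.size),         # intercept
    ])
    coeffs, residuals, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs, residuals

img = np.random.default_rng(2).random((64, 64))
coeffs, _ = fit_2d_ar(img)
print(coeffs)   # estimated parameters would form the feature vector for classification
```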
2017-12-01
...satisfactory performance. We do not use statistical models, and we do not create patterns that require supervised learning. Our methodology is intended for use in personal digital image...
Multi-region statistical shape model for cochlear implantation
NASA Astrophysics Data System (ADS)
Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.
2016-03-01
Statistical shape models are commonly used to analyze the variability between similar anatomical structures and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.
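The Point Distribution Model at the core of both the global and per-region models can be sketched as a PCA of aligned landmark coordinates, as below; the landmark data are synthetic, and the multi-region extension is only indicated in a comment.

```python
import numpy as np

def build_pdm(landmarks, var_kept=0.95):
    """Point Distribution Model: mean shape plus principal modes of variation.

    landmarks : (n_shapes, n_points * 3) aligned landmark coordinates."""
    mean_shape = landmarks.mean(axis=0)
    centred = landmarks - mean_shape
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    var = s ** 2 / (landmarks.shape[0] - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_kept) + 1
    return mean_shape, vt[:k], var[:k]

def synthesize_shape(mean_shape, modes, b):
    """Generate a shape from mode weights b (bounded, e.g. |b_i| <= 3*sqrt(var_i))."""
    return mean_shape + b @ modes

# A multi-region model would build one such PDM per physiological region
# and combine them, rather than one PDM for the whole inner ear.
shapes = np.random.default_rng(3).normal(size=(17, 300))   # 17 training shapes, 100 3-D points
mean_shape, modes, variances = build_pdm(shapes)
sample = synthesize_shape(mean_shape, modes, 2.0 * np.sqrt(variances))
```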
Compressed Sensing (CS) Imaging with Wide FOV and Dynamic Magnification
2011-03-14
...Digital Micromirror Device (DMD) to implement the CS measurement patterns. The core component of the DMD is a 768(V) × 1024(H) aluminum micromirror array... image has different curves and textures, thus has different statistical model parameters.
Hultenmo, Maria; Caisander, Håkan; Mack, Karsten; Thilander-Klang, Anne
2016-06-01
The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR using the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Statistical regularities of art images and natural scenes: spectra, sparseness and nonlinearities.
Graham, Daniel J; Field, David J
2007-01-01
Paintings are the product of a process that begins with ordinary vision in the natural world and ends with manipulation of pigments on canvas. Because artists must produce images that can be seen by a visual system that is thought to take advantage of statistical regularities in natural scenes, artists are likely to replicate many of these regularities in their painted art. We have tested this notion by computing basic statistical properties and modeled cell response properties for a large set of digitized paintings and natural scenes. We find that both representational and non-representational (abstract) paintings from our sample (124 images) show basic similarities to a sample of natural scenes in terms of their spatial frequency amplitude spectra, but the paintings and natural scenes show significantly different mean amplitude spectrum slopes. We also find that the intensity distributions of paintings show a lower skewness and sparseness than natural scenes. We account for this by considering the range of luminances found in the environment compared to the range available in the medium of paint. A painting's range is limited by the reflective properties of its materials. We argue that artists do not simply scale the intensity range down but use a compressive nonlinearity. In our studies, modeled retinal and cortical filter responses to the images were less sparse for the paintings than for the natural scenes. But when a compressive nonlinearity was applied to the images, both the paintings' sparseness and the modeled responses to the paintings showed the same or greater sparseness compared to the natural scenes. This suggests that artists achieve some degree of nonlinear compression in their paintings. Because paintings have captivated humans for millennia, finding basic statistical regularities in paintings' spatial structure could grant insights into the range of spatial patterns that humans find compelling.
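Two of the statistics compared above, the radially averaged amplitude spectrum slope and the intensity skewness, can be computed as in the following sketch; the frequency range and other parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import skew

def amplitude_spectrum_slope(image):
    """Slope of log amplitude vs. log spatial frequency (radially averaged)."""
    img = image.astype(float)
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), amp.ravel()) / counts      # radial average of amplitude
    f = np.arange(1, min(h, w) // 2)                            # skip DC, stay below Nyquist
    slope = np.polyfit(np.log(f), np.log(radial[f] + 1e-12), 1)[0]
    return slope

img = np.random.default_rng(4).random((128, 128))
print(amplitude_spectrum_slope(img), skew(img.ravel()))
```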
Karimi, Davood; Samei, Golnoosh; Kesch, Claudia; Nir, Guy; Salcudean, Septimiu E
2018-05-15
Most of the existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods that have originally been developed for segmentation of natural images. Therefore, they largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to the displacements computed based on the shape model, and (3) we employ various regularization techniques. Our proposed method achieves a Dice score of 0.88, which is obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance on the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
NASA Technical Reports Server (NTRS)
Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.
1981-01-01
The initial phase of a program to determine the best interpretation strategy and sensor configuration for a radar remote sensing system for geologic applications is discussed. In this phase, terrain modeling and radar image simulation were used to perform parametric sensitivity studies. A relatively simple computer-generated terrain model is presented, and the data base, backscatter file, and transfer function for digital image simulation are described. Sets of images are presented that simulate the results obtained with an X-band radar from an altitude of 800 km and at three different terrain-illumination angles. The simulations include power maps, slant-range images, ground-range images, and ground-range images with statistical noise incorporated. It is concluded that digital image simulation and computer modeling provide cost-effective methods for evaluating terrain variations and sensor parameter changes, for predicting results, and for defining optimum sensor parameters.
An Example-Based Brain MRI Simulation Framework.
He, Qing; Roy, Snehashis; Jog, Amod; Pham, Dzung L
2015-02-21
The simulation of magnetic resonance (MR) images plays an important role in the validation of image analysis algorithms such as image segmentation, due to lack of sufficient ground truth in real MR images. Previous work on MRI simulation has focused on explicitly modeling the MR image formation process. However, because of the overwhelming complexity of MR acquisition these simulations must involve simplifications and approximations that can result in visually unrealistic simulated images. In this work, we describe an example-based simulation framework, which uses an "atlas" consisting of an MR image and its anatomical models derived from the hard segmentation. The relationships between the MR image intensities and its anatomical models are learned using a patch-based regression that implicitly models the physics of the MR image formation. Given the anatomical models of a new brain, a new MR image can be simulated using the learned regression. This approach has been extended to also simulate intensity inhomogeneity artifacts based on the statistical model of training data. Results show that the example based MRI simulation method is capable of simulating different image contrasts and is robust to different choices of atlas. The simulated images resemble real MR images more than simulations produced by a physics-based model.
SAR Speckle Noise Reduction Using Wiener Filter
NASA Technical Reports Server (NTRS)
Joo, T. H.; Held, D. N.
1983-01-01
Synthetic aperture radar (SAR) images are degraded by speckle. A multiplicative speckle noise model for SAR images is presented. Using this model, a Wiener filter is derived by minimizing the mean-squared error using the known speckle statistics. Implementation of the Wiener filter is discussed and experimental results are presented. Finally, possible improvements to this method are explored.
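A minimal sketch of a local-statistics (Lee-type) filter for the multiplicative model I = R·n, in the spirit of a Wiener estimator derived from speckle statistics, is given below; the window size and number of looks are illustrative, and this is not the authors' exact derivation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_speckle_filter(intensity, window=7, n_looks=1):
    """Wiener-type (Lee) filter for the multiplicative model I = R * n,
    where n has unit mean and variance 1/n_looks."""
    local_mean = uniform_filter(intensity, window)
    local_var = uniform_filter(intensity ** 2, window) - local_mean ** 2
    noise_var = 1.0 / n_looks                                   # speckle variance (unit-mean noise)
    # variance of the underlying reflectivity implied by the multiplicative model
    signal_var = np.maximum((local_var - noise_var * local_mean ** 2) / (1.0 + noise_var), 0)
    gain = signal_var / (signal_var + noise_var * local_mean ** 2 + 1e-12)
    return local_mean + gain * (intensity - local_mean)         # MMSE-style linear estimate

speckled = np.random.default_rng(5).exponential(scale=1.0, size=(128, 128))
filtered = lee_speckle_filter(speckled)
```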
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
...process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). The NENI flowchart is seen in Figure 6...
Fusion of Local Statistical Parameters for Buried Underwater Mine Detection in Sonar Imaging
NASA Astrophysics Data System (ADS)
Maussang, F.; Rombaut, M.; Chanussot, J.; Hétet, A.; Amate, M.
2008-12-01
Detection of buried underwater objects, and especially mines, is a crucial strategic task. Images provided by sonar systems able to penetrate the sea floor, such as synthetic aperture sonars (SASs), are of great interest for the detection and classification of such objects. However, the signal-to-noise ratio is fairly low, and advanced information processing is required for correct and reliable detection of the echoes generated by the objects. The detection method proposed in this paper is based on a data-fusion architecture using belief theory. The input data of this architecture are local statistical characteristics extracted from SAS data, corresponding to the first-, second-, third-, and fourth-order statistical properties of the sonar images, respectively. The interest of these parameters is derived from a statistical model of the sonar data. Numerical criteria are also proposed to estimate the detection performance and to validate the method.
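The local first- to fourth-order statistics used as inputs to such a fusion architecture can be computed over a sliding window as sketched below; the window size is illustrative, and the belief-theory fusion itself is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_moments(image, window=15):
    """First- to fourth-order local statistics over a sliding window."""
    img = image.astype(float)
    m1 = uniform_filter(img, window)           # raw moments via box filtering
    m2 = uniform_filter(img ** 2, window)
    m3 = uniform_filter(img ** 3, window)
    m4 = uniform_filter(img ** 4, window)
    var = np.maximum(m2 - m1 ** 2, 1e-12)
    skewness = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / var ** 1.5
    kurtosis = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / var ** 2
    return m1, var, skewness, kurtosis

sas_image = np.random.default_rng(6).rayleigh(size=(256, 256))   # stand-in SAS intensity image
features = local_moments(sas_image)   # one feature map per statistical order
```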
NASA Astrophysics Data System (ADS)
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
How daylight influences high-order chromatic descriptors in natural images.
Ojeda, Juan; Nieves, Juan Luis; Romero, Javier
2017-07-01
Despite the global and local daylight changes naturally occurring in natural scenes, the human visual system usually adapts quite well to those changes, developing a stable color perception. Nevertheless, the influence of daylight in modeling natural image statistics is not fully understood and has received little attention. The aim of this work was to analyze the influence of daylight changes on different high-order chromatic descriptors (i.e., color volume, color gamut, and number of discernible colors) derived from 350 color images, which were rendered under 108 natural illuminants with Correlated Color Temperatures (CCT) from 2735 to 25,889 K. Results suggest that chromatic and luminance information is almost constant and does not depend on the CCT of the illuminant for values above 14,000 K. Nevertheless, differences between the red-green and blue-yellow image components were found below that CCT, with most of the statistical descriptors analyzed showing local extremes in the range 2950 K-6300 K. Uniform regions and areas of the images attracting observers' attention were also considered in this analysis and were characterized by their patchiness index and their saliency maps. The patchiness index does not show a clear dependence on CCT, but it is remarkable that a significant reduction in the number of discernible colors (58% on average) was found when the images were masked with their corresponding saliency maps. Our results suggest that chromatic diversity, as defined in terms of the discernible colors, can be strongly reduced when an observer scans a natural scene. These findings support the idea that a reduction in the number of discernible colors will guide visual saliency and attention. Whatever model mediates the neural representation of natural images, it is clear that natural image statistics should take into account those local maxima and minima that depend on the daylight illumination, as well as the reduction in the number of discernible colors when salient regions are considered.
Defect detection of castings in radiography images using a robust statistical feature.
Zhao, Xinyue; He, Zaixing; Zhang, Shuyou
2014-01-01
One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods deal with the case of an object with complex structure well. However, detection of small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship based on the GAP feature from previously acquired images. Second, defects can be extracted by comparing the difference of intensity-difference signs between the input image and the model statistically. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For the comparison, we also evaluated the performance of the proposed method as well as that of the mixture-of-Gaussian-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.
Pulsed terahertz imaging of breast cancer in freshly excised murine tumors
NASA Astrophysics Data System (ADS)
Bowman, Tyler; Chavez, Tanny; Khan, Kamrul; Wu, Jingxian; Chakraborty, Avishek; Rajaram, Narasimhan; Bailey, Keith; El-Shenawee, Magda
2018-02-01
This paper investigates terahertz (THz) imaging and classification of freshly excised murine xenograft breast cancer tumors. These tumors are grown via injection of E0771 breast adenocarcinoma cells into the flank of mice maintained on high-fat diet. Within 1 h of excision, the tumor and adjacent tissues are imaged using a pulsed THz system in the reflection mode. The THz images are classified using a statistical Bayesian mixture model with unsupervised and supervised approaches. Correlation with digitized pathology images is conducted using classification images assigned by a modal class decision rule. The corresponding receiver operating characteristic curves are obtained based on the classification results. A total of 13 tumor samples obtained from 9 tumors are investigated. The results show good correlation of THz images with pathology results in all samples of cancer and fat tissues. For tumor samples of cancer, fat, and muscle tissues, THz images show reasonable correlation with pathology where the primary challenge lies in the overlapping dielectric properties of cancer and muscle tissues. The use of a supervised regression approach shows improvement in the classification images although not consistently in all tissue regions. Advancing THz imaging of breast tumors from mice and the development of accurate statistical models will ultimately progress the technique for the assessment of human breast tumor margins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapuyade-Lahorgue, J; Ruan, S; Li, H
Purpose: Multi-tracer PET imaging is getting more attention in radiotherapy because it provides additional tumor volume information, such as glucose uptake and oxygenation. However, automatic PET-based tumor segmentation is still a very challenging problem. We propose a statistical fusion approach to jointly segment the sub-areas of tumors from FDG and FMISO PET images. Methods: Non-standardized Gamma distributions are convenient for modeling intensity distributions in PET. Because a serious correlation exists between multi-tracer PET images, we propose a new fusion method based on copulas, which are capable of representing the dependency between different tracers. The Hidden Markov Field (HMF) model is used to represent the spatial relationship between PET image voxels and the statistical dynamics of intensities for each modality. Real FDG and FMISO PET images of five patients are used to evaluate our method quantitatively and qualitatively. A comparison between individual and multi-tracer segmentations was conducted to show the advantages of the proposed fusion method. Results: The segmentation results show that fusion with a Gaussian copula achieves a high Dice coefficient of 0.84, compared with 0.54 and 0.3 for monomodal segmentations based on individual FDG and FMISO PET images. In addition, high correlation coefficients (0.75 to 0.91) for the Gaussian copula across all five test patients indicate the dependency between tumor regions in the multi-tracer PET images. Conclusion: This study shows that multi-tracer PET imaging can efficiently improve the segmentation of tumor regions where hypoxia and glucidic consumption are present at the same time. Introducing copulas to model the dependency between two tracers takes information from both tracers into account simultaneously and deals with the two pathological phenomena. Future work will consider other families of copulas, such as spherical and Archimedean copulas, and will aim to eliminate the partial volume effect by considering dependency between neighboring voxels.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper describes a novel automated system, based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision-theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide whether the null hypothesis holds. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup, and grape juice. The decision-theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
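As a rough analogue of the hypothesis test described, the sketch below fits one- and two-component Gaussian mixtures and compares them with an information criterion (BIC here, as a stand-in for the paper's MDL statistic and modified EM algorithm); the toy pixel data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def stain_present(pixel_features):
    """Decide between the null (1-component) and alternate (2-component)
    hypotheses on colour/intensity values of a fabric image."""
    x = pixel_features.reshape(-1, pixel_features.shape[-1])
    gmm1 = GaussianMixture(n_components=1).fit(x)
    gmm2 = GaussianMixture(n_components=2).fit(x)
    if gmm2.bic(x) < gmm1.bic(x):                 # two modes explain the data better
        post = gmm2.predict_proba(x)              # probability map used for segmentation
        return True, post
    return False, None

rng = np.random.default_rng(7)
clean = rng.normal(0.8, 0.05, size=(1000, 3))                 # unstained fabric pixels
stained = np.vstack([clean, rng.normal(0.4, 0.05, size=(200, 3))])  # add a stain mode
print(stain_present(stained)[0])   # expected: True
```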
Agner, Shannon C; Xu, Jun; Madabhushi, Anant
2013-03-01
Segmentation of breast lesions on dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) is the first step in lesion diagnosis in a computer-aided diagnosis framework. Because manual segmentation of such lesions is both time consuming and highly susceptible to human error and issues of reproducibility, an automated lesion segmentation method is highly desirable. Traditional automated image segmentation methods such as boundary-based active contour (AC) models require a strong gradient at the lesion boundary. Even when region-based terms are introduced to an AC model, grayscale image intensities often do not allow for clear definition of foreground and background region statistics. Thus, there is a need to find alternative image representations that might provide (1) strong gradients at the margin of the object of interest (OOI); and (2) larger separation between intensity distributions and region statistics for the foreground and background, which are necessary to halt evolution of the AC model upon reaching the border of the OOI. In this paper, the authors introduce a spectral embedding (SE) based AC (SEAC) for lesion segmentation on breast DCE-MRI. SE, a nonlinear dimensionality reduction scheme, is applied to the DCE time series in a voxelwise fashion to reduce several time point images to a single parametric image where every voxel is characterized by the three dominant eigenvectors. This parametric eigenvector image (PrEIm) representation allows for better capture of image region statistics and stronger gradients for use with a hybrid AC model, which is driven by both boundary and region information. They compare SEAC to ACs that employ fuzzy c-means (FCM) and principal component analysis (PCA) as alternative image representations. Segmentation performance was evaluated by boundary and region metrics as well as comparing lesion classification using morphological features from SEAC, PCA+AC, and FCM+AC. On a cohort of 50 breast DCE-MRI studies, PrEIm yielded overall better region and boundary-based statistics compared to the original DCE-MR image, FCM, and PCA based image representations. Additionally, SEAC outperformed a hybrid AC applied to both PCA and FCM image representations. Mean dice similarity coefficient (DSC) for SEAC was significantly better (DSC = 0.74 ± 0.21) than FCM+AC (DSC = 0.50 ± 0.32) and similar to PCA+AC (DSC = 0.73 ± 0.22). Boundary-based metrics of mean absolute difference and Hausdorff distance followed the same trends. Of the automated segmentation methods, breast lesion classification based on morphologic features derived from SEAC segmentation using a support vector machine classifier also performed better (AUC = 0.67 ± 0.05; p < 0.05) than FCM+AC (AUC = 0.50 ± 0.07), and PCA+AC (AUC = 0.49 ± 0.07). In this work, we presented SEAC, an accurate, general purpose AC segmentation tool that could be applied to any imaging domain that employs time series data. SE allows for projection of time series data into a PrEIm representation so that every voxel is characterized by the dominant eigenvectors, capturing the global and local time-intensity curve similarities in the data. This PrEIm allows for the calculation of strong tensor gradients and better region statistics than the original image intensities or alternative image representations such as PCA and FCM. The PrEIm also allows for building a more accurate hybrid AC scheme.
Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau
2009-01-01
Background The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. Conclusions The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398
Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A
2018-05-01
The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images were evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.
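The Bethe-equation step that turns an estimated relative electron density and mean excitation energy into an SPR can be sketched as below; the water I-value and proton energy (entering via beta squared) are illustrative constants, not the study's calibration.

```python
import numpy as np

ME_C2_EV = 0.511e6            # electron rest energy [eV]
I_WATER_EV = 75.0             # mean excitation energy of water [eV] (a commonly used value)

def bethe_term(i_ev, beta2):
    """Logarithmic stopping term of the Bethe equation (density effect neglected)."""
    return np.log(2 * ME_C2_EV * beta2 / (i_ev * (1 - beta2))) - beta2

def proton_spr(rel_electron_density, i_ev, beta2=0.18):
    """Stopping power ratio relative to water; beta2 ~ 0.18 near 100 MeV protons."""
    return rel_electron_density * bethe_term(i_ev, beta2) / bethe_term(I_WATER_EV, beta2)

print(proton_spr(1.05, 70.0))     # e.g. a soft-tissue-like voxel
```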
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
A Statistical Texture Feature for Building Collapse Information Extraction of SAR Image
NASA Astrophysics Data System (ADS)
Li, L.; Yang, H.; Chen, Q.; Liu, X.
2018-04-01
Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed-building information, owing to its versatility and almost all-weather, day-and-night working capability. Because the inherent statistical distribution of speckle in SAR images has not been exploited for extracting collapsed-building information, this paper proposes a novel texture feature derived from statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution estimated from SAR images is used to reflect the uniformity of the target and thereby identify collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but can also be applied to extract collapsed-building information from single-, dual-, or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to demonstrate and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analysed, which provides decision support for data selection in collapsed-building information extraction.
Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L
2016-01-15
Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jacquard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation of brain volumes are preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them useable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.
A comparison of linear interpolation models for iterative CT reconstruction.
Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric
2016-12-01
Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects of all models. The metrics include a surrogate for computational cost, as well as bias, noise, and an estimation task, all at matched resolution. The analysis revealed fundamental differences in terms of both bias and noise. Task-based assessment appears to be required to appreciate the differences in noise; the estimation task the authors selected showed that these differences balance out to yield similar performance. Some scenarios highlighted merits for the distance-driven method in terms of bias but with an increase in computational cost. Three combinations of statistical weights and penalty term showed that the observed differences remain the same, but strong edge-preserving penalty can dramatically reduce the magnitude of these differences. In many scenarios, Joseph's method seems to offer an interesting compromise between cost and computational effort. The distance-driven method offers the possibility to reduce bias but with an increase in computational cost. The bilinear method indicated that a key assumption in the other two methods is highly robust. Last, strong edge-preserving penalty can act as a compensator for insufficiencies in the forward projection model, bringing all models to similar levels in the most challenging imaging scenarios. Also, the authors find that their evaluation methodology helps appreciating how model, statistical weights, and penalty term interplay together.
Nonparametric statistical modeling of binary star separations
NASA Technical Reports Server (NTRS)
Heacox, William D.; Gathright, John
1994-01-01
We develop a comprehensive statistical model for the distribution of observed separations in binary star systems, in terms of distributions of orbital elements, projection effects, and distances to systems. We use this model to derive several diagnostics for estimating the completeness of imaging searches for stellar companions, and the underlying stellar multiplicities. In application to recent imaging searches for low-luminosity companions to nearby M dwarf stars, and for companions to young stars in nearby star-forming regions, our analyses reveal substantial uncertainty in estimates of stellar multiplicity. For binary stars with late-type dwarf companions, semimajor axes appear to be distributed approximately as a^(-1) for values ranging from about one to several thousand astronomical units. About one-quarter of the companions to field F and G dwarf stars have semimajor axes less than 1 AU, and about 15% lie beyond 1000 AU. The geometric efficiency (fraction of companions imaged onto the detector) of imaging searches is nearly independent of distances to program stars and orbital eccentricities, and varies only slowly with detector spatial limitations.
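To make the projection effect discussed above concrete, here is a small Monte Carlo sketch that draws semimajor axes from an a^(-1) distribution and projects circular orbits with isotropic orientations onto the sky. The circular-orbit simplification, the 1-3000 AU range, and the 10-1000 AU "detector annulus" are illustrative assumptions, not the paper's full model (which also treats eccentricity and distance).

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Semimajor axes distributed as p(a) proportional to 1/a between 1 and 3000 AU
# (log-uniform sampling).
a = np.exp(rng.uniform(np.log(1.0), np.log(3000.0), n))

# Circular orbits with isotropic orientations: cos(i) uniform on [0, 1],
# orbital phase uniform on [0, 2*pi).
cos_i = rng.uniform(0.0, 1.0, n)
phi = rng.uniform(0.0, 2.0 * np.pi, n)
projected = a * np.sqrt(np.cos(phi) ** 2 + (np.sin(phi) * cos_i) ** 2)

# Fraction of companions whose projected separation falls inside a
# hypothetical 10-1000 AU annulus at the target distance.
in_field = np.mean((projected > 10.0) & (projected < 1000.0))
print(f"geometric efficiency for a 10-1000 AU annulus: {in_field:.3f}")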
Twenty-five years of maximum-entropy principle
NASA Astrophysics Data System (ADS)
Kapur, J. N.
1983-04-01
The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
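For reference, the core MEP formalism alluded to above can be stated compactly; the constraint functions g_r and multipliers lambda_r are generic notation, not specific to the review:

\[
\max_{p}\; S[p] = -\sum_{i} p_i \ln p_i
\quad\text{s.t.}\quad
\sum_i p_i = 1,\qquad
\sum_i p_i\, g_r(x_i) = \langle g_r\rangle,\quad r = 1,\dots,m,
\]
\[
\Rightarrow\quad
p_i = \exp\!\Big(-\lambda_0 - \sum_{r=1}^{m}\lambda_r\, g_r(x_i)\Big),
\]

with the multipliers fixed by substituting the solution back into the constraints. This exponential-family form is what connects the principle to statistical mechanics and to the many application areas listed.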
Detection of reflecting surfaces by a statistical model
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Chee-Hung H.
2009-02-01
Remote sensing is widely used to assess the destruction caused by natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting objects of interest from digital images, including remote sensing imagery, is therefore a critical task for image understanding. Unfortunately, current research on automated feature extraction largely ignores contextual information, so the fidelity of the attributes populated for features and objects of interest is often unsatisfactory. In this paper, we explore meaningful object extraction that incorporates reflecting surfaces. Detection of specular reflecting surfaces can be useful for target identification and can thus be applied to environmental monitoring, disaster prediction and analysis, and military and counter-terrorism applications. Our method uses a statistical model to capture the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing the surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time. It is presented as a quadratic response surface. The investigated process window was the intervals 4–16 min and 160–200°C in a forced convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the most significant wavelengths for browning predictions were in the interval 400–700 nm and the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to correctly estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
"Big Data" in Rheumatology: Intelligent Data Modeling Improves the Quality of Imaging Data.
Landewé, Robert B M; van der Heijde, Désirée
2018-05-01
Analysis of imaging data in rheumatology is a challenge. Reliability of scores is an issue for several reasons. Signal-to-noise ratio of most imaging techniques is rather unfavorable (too little signal in relation to too much noise). Optimal use of all available data may help to increase credibility of imaging data, but knowledge of complicated statistical methodology and the help of skilled statisticians are required. Clinicians should appreciate the merits of sophisticated data modeling and liaise with statisticians to increase the quality of imaging results, as proper imaging studies in rheumatology imply more than a supersensitive imaging technique alone. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lanusse, Francois; Ravanbakhsh, Siamak; Mandelbaum, Rachel; Schneider, Jeff; Poczos, Barnabas
2017-01-01
Weak gravitational lensing has long been identified as one of the most powerful probes of the nature of dark energy. As such, weak lensing is at the heart of the next generation of cosmological surveys such as LSST, Euclid, and WFIRST. One particularly critical source of systematic errors in these surveys comes from the shape measurement algorithms tasked with estimating galaxy shapes. GREAT3, the last community challenge to assess the quality of state-of-the-art shape measurement algorithms, demonstrated in particular that all current methods are biased to various degrees and, more importantly, that these biases depend on the details of the galaxy morphologies. These biases can be measured and calibrated by generating mock observations in which a known lensing signal has been introduced and comparing the resulting measurements to the ground truth. Producing these mock observations, however, requires input galaxy images of higher resolution and S/N than the simulated survey, which typically implies acquiring extremely expensive space-based observations. The goal of this work is to train a deep generative model on already available Hubble Space Telescope data which can then be used to sample new galaxy images conditioned on parameters such as magnitude, size, or redshift and exhibiting complex morphologies. Such a model allows us to inexpensively produce large sets of realistic images for calibration purposes. We implement a conditional generative model based on state-of-the-art deep learning methods and fit it to deep galaxy images from the COSMOS survey. The quality of the model is assessed by computing an extensive set of galaxy morphology statistics on the generated images. Beyond simple second-moment statistics such as size and ellipticity, we apply more complex statistics specifically designed to be sensitive to disturbed galaxy morphologies. We find excellent agreement between the morphologies of real and model-generated galaxies. Our results suggest that such deep generative models represent a reliable alternative to the acquisition of expensive high-quality observations for generating the calibration data needed by the next generation of weak lensing surveys.
MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera
NASA Astrophysics Data System (ADS)
Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.
2012-10-01
This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.
Efficient detection of wound-bed and peripheral skin with statistical colour models.
Veredas, Francisco J; Mesa, Héctor; Morente, Laura
2015-04-01
A pressure ulcer is a clinical pathology of localised damage to the skin and underlying tissue caused by pressure, shear or friction. Reliable diagnosis supported by precise wound evaluation is crucial for successful treatment decisions. This paper presents a computer-vision approach to wound-area detection based on statistical colour models. Starting with a training set consisting of 113 real wound images, colour histogram models are created for four different tissue types. Back-projections of colour pixels on those histogram models are used, from a Bayesian perspective, to get an estimate of the posterior probability of a pixel belonging to any of those tissue classes. Performance measures obtained from contingency tables based on a gold standard of segmented images supplied by experts have been used for model selection. The resulting fitted model has been validated on a set consisting of 322 wound images manually segmented and labelled by expert clinicians. The final fitted segmentation model shows robustness and gives high mean performance rates [AUC: .9426 (SD .0563); accuracy: .8777 (SD .0799); F-score: .7389 (SD .1550); Cohen's kappa: .6585 (SD .1787)] when segmenting significant wound areas that include healing tissues.
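A minimal sketch of the histogram back-projection step described above, assuming RGB training pixels grouped by tissue label and uniform 16-bin histograms per channel. The bin count, the priors, and the smoothing constant are illustrative choices; the paper's model-selection and final segmentation stages are not reproduced here.

import numpy as np

def fit_colour_histograms(pixels_per_class, bins=16):
    """Estimate one normalised RGB histogram (the class-conditional
    likelihood p(colour | tissue)) per tissue class from training pixels."""
    edges = np.linspace(0, 256, bins + 1)
    hists = {}
    for label, pixels in pixels_per_class.items():       # pixels: (N, 3) uint8
        h, _ = np.histogramdd(pixels, bins=(edges, edges, edges))
        hists[label] = h / h.sum()
    return hists, edges

def backproject_posterior(image, hists, edges, priors):
    """Per-pixel posterior p(tissue | colour) for an (H, W, 3) image
    via Bayes' rule on the back-projected class likelihoods."""
    idx = np.clip(np.digitize(image, edges) - 1, 0, len(edges) - 2)
    likelihoods = {c: h[idx[..., 0], idx[..., 1], idx[..., 2]]
                   for c, h in hists.items()}
    evidence = sum(priors[c] * likelihoods[c] for c in hists) + 1e-12
    return {c: priors[c] * likelihoods[c] / evidence for c in hists}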
NASA Astrophysics Data System (ADS)
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
The authors describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
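For orientation, the CHO statistic referred to above can be sketched as follows from sample reconstructions. Note that the paper obtains the required means and covariances from a theoretical approximation for MAP images rather than from samples, so this is only a generic, sample-based stand-in with hypothetical array shapes.

import numpy as np

def cho_statistics(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer from sample images.
    signal_imgs, noise_imgs: (N, npix) vectorised reconstructions with and
    without the signal; channels: (npix, nchan) matrix of channel profiles."""
    vs = signal_imgs @ channels            # (N, nchan) channel outputs
    vn = noise_imgs @ channels
    dv = vs.mean(axis=0) - vn.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(S, dv)             # Hotelling template in channel space
    t_signal, t_noise = vs @ w, vn @ w     # scalar observer test statistics
    d_prime = np.sqrt(dv @ w)              # observer detectability index
    return t_signal, t_noise, d_prime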
Effective temperature in an interacting vertex system: theory and experiment on artificial spin ice.
Nisoli, Cristiano; Li, Jie; Ke, Xianglin; Garand, D; Schiffer, Peter; Crespi, Vincent H
2010-07-23
Frustrated arrays of interacting single-domain nanomagnets provide important model systems for statistical mechanics, as they map closely onto well-studied vertex models and are amenable to direct imaging and custom engineering. Although these systems are manifestly athermal, we demonstrate that an effective temperature, controlled by an external magnetic drive, describes their microstates and therefore their full statistical properties.
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.
2015-03-01
Traumatic brain injury (TBI) is a major cause of death and disability. The current front-line imaging modality for TBI detection is CT, which reliably detects intracranial hemorrhage (fresh blood contrast 30-50 HU, size down to 1 mm) in non-contrast-enhanced exams. Compared to CT, flat-panel detector (FPD) cone-beam CT (CBCT) systems offer lower cost, greater portability, and smaller footprint suitable for point-of-care deployment. We are developing FPD-CBCT to facilitate TBI detection at the point-of-care such as in emergent, ambulance, sports, and military applications. However, current FPD-CBCT systems generally face challenges in low-contrast, soft-tissue imaging. Model-based reconstruction can improve image quality in soft-tissue imaging compared to conventional filtered back-projection (FBP) by leveraging high-fidelity forward model and sophisticated regularization. In FPD-CBCT TBI imaging, measurement noise characteristics undergo substantial change following artifact correction, resulting in non-negligible noise amplification. In this work, we extend the penalized weighted least-squares (PWLS) image reconstruction to include the two dominant artifact corrections (scatter and beam hardening) in FPD-CBCT TBI imaging by correctly modeling the variance change following each correction. Experiments were performed on a CBCT test-bench using an anthropomorphic phantom emulating intra-parenchymal hemorrhage in acute TBI, and the proposed method demonstrated an improvement in blood-brain contrast-to-noise ratio (CNR = 14.2) compared to FBP (CNR = 9.6) and PWLS using conventional weights (CNR = 11.6) at fixed spatial resolution (1 mm edge-spread width at the target contrast). The results support the hypothesis that FPD-CBCT can fulfill the image quality requirements for reliable TBI detection, using high-fidelity artifact correction and statistical reconstruction with accurate post-artifact-correction noise models.
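The PWLS objective underlying the method reads, in generic notation (the paper's specific contribution, propagating the variance change introduced by the scatter and beam-hardening corrections into the weights, is not spelled out here):

\[
\hat{\mu} \;=\; \arg\min_{\mu}\; (y - A\mu)^{\mathsf T}\, W\, (y - A\mu) \;+\; \beta\, R(\mu),
\qquad
W = \operatorname{diag}\!\left(1/\sigma_i^{2}\right),
\]

where y are the artifact-corrected measurements, A the forward projector, R the regularization (penalty) term with strength beta, and sigma_i^2 the post-correction variance of the i-th measurement.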
Robust crop and weed segmentation under uncontrolled outdoor illumination
USDA-ARS?s Scientific Manuscript database
A new machine vision algorithm for weed detection was developed from RGB color model images. Processes included in the algorithm for the detection were excessive green conversion, threshold value computation by statistical analysis, adaptive image segmentation by adjusting the threshold value, median filter, ...
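A minimal sketch of the excess-green conversion and thresholding pipeline named above. The mean-plus-half-standard-deviation threshold rule and the 5-pixel median filter are illustrative stand-ins for the statistical threshold computation in the manuscript.

import numpy as np
from scipy.ndimage import median_filter

def excess_green_mask(rgb, size=5):
    """Vegetation mask from the excess-green index ExG = 2g - r - b on
    chromatic coordinates, followed by a simple statistical threshold and
    a median filter for noise cleanup."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9
    r, g, b = (rgb[..., c] / total for c in range(3))   # chromatic coordinates
    exg = 2.0 * g - r - b
    threshold = exg.mean() + 0.5 * exg.std()             # illustrative rule
    mask = (exg > threshold).astype(np.uint8)
    return median_filter(mask, size=size).astype(bool)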
Probabilistic inversion with graph cuts: Application to the Boise Hydrogeophysical Research Site
NASA Astrophysics Data System (ADS)
Pirot, Guillaume; Linde, Niklas; Mariethoz, Grégoire; Bradford, John H.
2017-02-01
Inversion methods that build on multiple-point statistics tools offer the possibility to obtain model realizations that are not only in agreement with field data, but also with conceptual geological models that are represented by training images. A recent inversion approach based on patch-based geostatistical resimulation using graph cuts outperforms state-of-the-art multiple-point statistics methods when applied to synthetic inversion examples featuring continuous and discontinuous property fields. Applications of multiple-point statistics tools to field data are challenging due to inevitable discrepancies between actual subsurface structure and the assumptions made in deriving the training image. We introduce several amendments to the original graph cut inversion algorithm and present a first-ever field application by addressing porosity estimation at the Boise Hydrogeophysical Research Site, Boise, Idaho. We consider both a classical multi-Gaussian and an outcrop-based prior model (training image) that are in agreement with available porosity data. When conditioning to available crosshole ground-penetrating radar data using Markov chain Monte Carlo, we find that the posterior realizations honor overall both the characteristics of the prior models and the geophysical data. The porosity field is inverted jointly with the measurement error and the petrophysical parameters that link dielectric permittivity to porosity. Even though the multi-Gaussian prior model leads to posterior realizations with higher likelihoods, the outcrop-based prior model shows better convergence. In addition, it offers geologically more realistic posterior realizations and it better preserves the full porosity range of the prior.
Statistical Method to Overcome Overfitting Issue in Rational Function Models
NASA Astrophysics Data System (ADS)
Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.
2017-09-01
Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.
Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan
2013-01-01
The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the 3D joint surface model has been reported in literature. In this study, we constructed a SSM database using 152 human CT knee joint models, including the femur, tibia and patella and analyzed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 seconds using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus it may have a broad application in computer assisted knee surgeries that require 3D surface models of the knee. PMID:24156375
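The PCA core of a statistical shape model of the kind used here can be sketched as follows, assuming the training surfaces have already been brought into point correspondence and flattened into vectors. The 2D-to-3D matching against bi-plane fluoroscopic images performed in the study is a separate optimization not shown, and the array shapes are hypothetical.

import numpy as np

def build_ssm(training_shapes):
    """Statistical shape model by PCA. training_shapes: (n_subjects,
    n_points*3) array of corresponding surface vertices, flattened."""
    mean = training_shapes.mean(axis=0)
    X = training_shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt                                   # principal shape modes
    variances = s ** 2 / (len(X) - 1)            # variance captured per mode
    return mean, modes, variances

def synthesize(mean, modes, variances, coeffs):
    """New shape instance from standardized mode coefficients b_k."""
    k = len(coeffs)
    return mean + (coeffs * np.sqrt(variances[:k])) @ modes[:k]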
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
Basak, Kausik; Dey, Goutam; Mahadevappa, Manjunatha; Mandal, Mahitosh; Sheet, Debdoot; Dutta, Pranab Kumar
2016-09-01
Laser speckle contrast imaging (LSCI) provides a noninvasive and cost-effective solution for in vivo monitoring of blood flow. So far, most research has considered changes in the speckle pattern (i.e., the correlation time of speckle intensity fluctuations) to account for relative changes in blood flow during abnormal conditions. This paper introduces an application of LSCI for monitoring wound progression and characterizing cutaneous wound regions in a mouse model. Speckle images are captured over a tumor wound region on the mouse leg at periodic intervals. Initially, raw speckle images are converted to their corresponding contrast images. Functional characterization begins with segmenting the affected area using k-means clustering, taking wavelet energies in a local region as the feature set. In the next stage, different regions in the wound bed are clustered based on the progressive and non-progressive nature of the tissue properties. Changes in contrast due to heterogeneity in tissue structure and functionality are modeled using LSCI speckle statistics. Final characterization is achieved through supervised learning of these speckle statistics using a support vector machine. On cross evaluation with the mouse model experiment, the proposed approach classifies the progressive and non-progressive wound regions with average sensitivities of 96.18% and 97.62% and average specificities of 97.24% and 96.42%, respectively. The clinical information yielded by this approach is validated against conventional immunohistochemistry results of the wound to justify the ability of LSCI for in vivo, noninvasive and periodic assessment of wounds. Copyright © 2016 Elsevier Inc. All rights reserved.
Topographic correction realization based on the CBERS-02B image
NASA Astrophysics Data System (ADS)
Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua
2011-08-01
The special topography of mountainous terrain induces retrieval distortions within the same land-cover class and in surface spectral signatures. In order to improve the accuracy of research on topographic surface characteristics, many researchers have focused on topographic correction. Topographic correction methods can be based on statistical-empirical models or physical models, among which methods based on digital elevation model (DEM) data are most popular. Restricted by spatial resolution, previous models mostly corrected the topographic effect on Landsat TM images, whose 30-meter-resolution data can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high spatial resolution images, such as Quickbird and Ikonos, but there is little related research on topographic correction of CBERS-02B images. In this study, mountainous terrain in Liaoning was taken as the study area. The original 15-meter digital elevation model data were interpolated to 2.36 meters to match the image resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. For the images corrected with each method, scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were calculated. The analysis shows that shadows are weakened in the corrected images relative to the original images, and the three-dimensional relief effect is removed. The absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method gives the most effective result. These results demonstrate that the above correction methods can be successfully adapted to CBERS-02B images. The DEM data can be interpolated step by step to approximate the corresponding spatial resolution when high spatial resolution elevation data are hard to obtain.
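As an illustration of one of the four methods compared, a minimal sketch of the C correction (the band is regressed against the cosine of the solar incidence angle, then rescaled); the SCS+C, Minnaert, and Ekstrand-r variants follow the same pattern with different normalizations. Variable names and the linear-fit details are assumptions, not the study's code.

import numpy as np

def c_correction(band, cos_i, solar_zenith_rad):
    """C correction for topographic illumination effects.
    band: radiance/DN image; cos_i: cosine of the solar incidence angle
    relative to the local surface normal (derived from the DEM);
    solar_zenith_rad: scene solar zenith angle in radians."""
    valid = np.isfinite(band) & np.isfinite(cos_i)
    b, a = np.polyfit(cos_i[valid], band[valid], 1)   # band ≈ a + b * cos_i
    c = a / b
    return band * (np.cos(solar_zenith_rad) + c) / (cos_i + c)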
Teruel, Jose R; Goa, Pål E; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F
2016-05-01
To compare "standard" diffusion weighted imaging, and diffusion tensor imaging (DTI) of 2(nd) and 4(th) -order for the differentiation of malignant and benign breast lesions. Seventy-one patients were imaged at 3 Tesla with a 16-channel breast coil. A diffusion weighted MRI sequence including b = 0 and b = 700 in 30 directions was obtained for all patients. The image data were fitted to three different diffusion models: isotropic model - apparent diffusion coefficient (ADC), 2(nd) -order tensor model (the standard model used for DTI) and a 4(th) -order tensor model, with increased degrees of freedom to describe anisotropy. The ability of the fitted parameters in the different models to differentiate between malignant and benign tumors was analyzed. Seventy-two breast lesions were analyzed, out of which 38 corresponded to malignant and 34 to benign tumors. ADC (using any model) presented the highest discriminative ability of malignant from benign tumors with a receiver operating characteristic area under the curve (AUC) of 0.968, and sensitivity and specificity of 94.1% and 94.7% respectively for a 1.33 × 10(-3) mm(2) /s cutoff. Anisotropy measurements presented high statistical significance between malignant and benign tumors (P < 0.001), but with lower discriminative ability of malignant from benign tumors than ADC (AUC of 0.896 and 0.897 for fractional anisotropy and generalized anisotropy respectively). Statistical significant difference was found between generalized anisotropy and fractional anisotropy for cancers (P < 0.001) but not for benign lesions (P = 0.87). While anisotropy parameters have the potential to provide additional value for breast applications as demonstrated in this study, ADC exhibited the highest differentiation power between malignant and benign breast tumors. © 2015 Wiley Periodicals, Inc.
MO-FG-209-05: Towards a Feature-Based Anthropomorphic Model Observer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avanaki, A.
2016-06-15
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in the observer models design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK; P. Bakic, NIH (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH research grants. AA: Employed at Barco Healthcare.
Vehicle track segmentation using higher order random fields
Quach, Tu -Thach
2017-01-09
Here, we present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.
NASA Astrophysics Data System (ADS)
Cui, Zhe; Wang, Anting; Ma, Qianli; Ming, Hai
2013-12-01
In this paper, the laser speckle pattern on the human retina for a laser projection display is simulated. By introducing a specific eye model, the "Indiana Eye", the statistical properties of the laser speckle are numerically investigated. The results show that the aberrations of the human eye (mostly spherical and chromatic) decrease the speckle contrast perceived by the viewer. When the wavelength of the laser source is 550 nm (green), the viewer perceives the strongest speckle pattern, and the weakest when the wavelength is 450 nm (blue). Myopia and hyperopia decrease the speckle contrast by introducing large spherical aberrations. Although aberration is good for speckle reduction, it degrades the imaging capability of the eye. The results show that a 650 nm laser source gives the best image quality on the retina. Finally, we compare the human eye with an aberration-free imaging system. Both the speckle contrast and the image quality behave differently in these two imaging systems. The results are useful when a standardized measurement procedure for speckle contrast needs to be established.
Imfit: A Fast, Flexible Program for Astronomical Image Fitting
NASA Astrophysics Data System (ADS)
Erwin, Peter
2014-08-01
Imfit is an open-source astronomical image-fitting program specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. Its object-oriented design allows new types of image components (2D surface-brightness functions) to be easily written and added to the program. Image functions provided with Imfit include Sersic, exponential, and Gaussian galaxy decompositions along with Core-Sersic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through 3D luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard chi^2 statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or the Cash statistic; the latter is particularly appropriate for cases of Poisson data in the low-count regime. The C++ source code for Imfit is available under the GNU Public License.
NASA Astrophysics Data System (ADS)
Høyer, Anne-Sophie; Vignoli, Giulio; Mejer Hansen, Thomas; Thanh Vu, Le; Keefer, Donald A.; Jørgensen, Flemming
2017-12-01
Most studies on the application of geostatistical simulations based on multiple-point statistics (MPS) to hydrogeological modelling focus on relatively fine-scale models and concentrate on the estimation of facies-level structural uncertainty. Much less attention is paid to the use of input data and optimal construction of training images. For instance, even though the training image should capture a set of spatial geological characteristics to guide the simulations, the majority of the research still relies on 2-D or quasi-3-D training images. In the present study, we demonstrate a novel strategy for 3-D MPS modelling characterized by (i) realistic 3-D training images and (ii) an effective workflow for incorporating a diverse group of geological and geophysical data sets. The study covers an area of 2810 km2 in the southern part of Denmark. MPS simulations are performed on a subset of the geological succession (the lower to middle Miocene sediments) which is characterized by relatively uniform structures and dominated by sand and clay. The simulated domain is large and each of the geostatistical realizations contains approximately 45 million voxels with size 100 m × 100 m × 5 m. Data used for the modelling include water well logs, high-resolution seismic data, and a previously published 3-D geological model. We apply a series of different strategies for the simulations based on data quality, and develop a novel method to effectively create observed spatial trends. The training image is constructed as a relatively small 3-D voxel model covering an area of 90 km2. We use an iterative training image development strategy and find that even slight modifications in the training image create significant changes in simulations. Thus, this study shows how to include both the geological environment and the type and quality of input information in order to achieve optimal results from MPS modelling. We present a practical workflow to build the training image and effectively handle different types of input information to perform large-scale geostatistical modelling.
Zhang, Zhiqing; Kuzmin, Nikolay V; Groot, Marie Louise; de Munck, Jan C
2017-06-01
The morphologies contained in 3D third harmonic generation (THG) images of human brain tissue can report on the pathological state of the tissue. However, the complexity of THG brain images makes the usage of modern image processing tools, especially those of image filtering, segmentation and validation, to extract this information challenging. We developed a salient edge-enhancing model of anisotropic diffusion for image filtering, based on higher order statistics. We split the intrinsic 3-phase segmentation problem into two 2-phase segmentation problems, each of which we solved with a dedicated model, active contour weighted by prior extreme. We applied the novel proposed algorithms to THG images of structurally normal ex-vivo human brain tissue, revealing key tissue components (brain cells, microvessels and neuropil) and enabling statistical characterization of these components. Comprehensive comparison to manually delineated ground truth validated the proposed algorithms. Quantitative comparison to second harmonic generation/auto-fluorescence images, acquired simultaneously from the same tissue area, confirmed the correctness of the main THG features detected. The software and test datasets are available from the authors. z.zhang@vu.nl. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Li, Jing-Sheng; Tsai, Tsung-Yuan; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Freiberg, Andrew; Rubash, Harry E.; Li, Guoan
2014-01-01
Using computed tomography (CT) or magnetic resonance (MR) images to construct 3D knee models has been widely used in biomedical engineering research. Statistical shape modeling (SSM) method is an alternative way to provide a fast, cost-efficient, and subject-specific knee modeling technique. This study was aimed to evaluate the feasibility of using a combined dual-fluoroscopic imaging system (DFIS) and SSM method to investigate in vivo knee kinematics. Three subjects were studied during a treadmill walking. The data were compared with the kinematics obtained using a CT-based modeling technique. Geometric root-mean-square (RMS) errors between the knee models constructed using the SSM and CT-based modeling techniques were 1.16 mm and 1.40 mm for the femur and tibia, respectively. For the kinematics of the knee during the treadmill gait, the SSM model can predict the knee kinematics with RMS errors within 3.3 deg for rotation and within 2.4 mm for translation throughout the stance phase of the gait cycle compared with those obtained using the CT-based knee models. The data indicated that the combined DFIS and SSM technique could be used for quick evaluation of knee joint kinematics. PMID:25320846
Estimation of images degraded by film-grain noise.
Naderi, F; Sawchuk, A A
1978-04-15
Film-grain noise describes the intrinsic noise produced by a photographic emulsion during the process of image recording and reproduction. In this paper we consider the restoration of images degraded by film-grain noise. First a detailed model for the overall photographic imaging system is presented. The model includes linear blurring effects and the signal-dependent effect of film-grain noise. The accuracy of this model is tested by simulating images according to it and comparing the results to images of similar targets that were actually recorded on film. The restoration of images degraded by film-grain noise is then considered in the context of estimation theory. A discrete Wiener filter is developed which explicitly allows for the signal dependence of the noise. The filter adaptively alters its characteristics based on the nonstationary first-order statistics of an image and is shown to have advantages over the conventional Wiener filter. Experimental results for modeling and the adaptive estimation filter are presented.
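In the spirit of the adaptive filter described above, here is a minimal locally adaptive Wiener-type sketch with a signal-dependent noise term. The proportional noise model and the noise_gain constant are crude illustrative assumptions, not the paper's film-grain model.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_wiener(image, window=7, noise_gain=0.01):
    """Locally adaptive Wiener-type filter. The noise variance is assumed
    proportional to the local mean, a rough stand-in for signal-dependent
    film-grain behaviour; `noise_gain` is an illustrative constant."""
    img = image.astype(np.float64)
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img ** 2, window) - local_mean ** 2
    noise_var = noise_gain * local_mean                  # signal-dependent noise
    gain = np.clip(local_var - noise_var, 0, None) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)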
Dillman, Jonathan R.; Goodsitt, Mitchell M.; Christodoulou, Emmanuel G.; Keshavarzi, Nahid; Strouse, Peter J.
2014-01-01
Purpose: To retrospectively compare image quality and radiation dose between a reduced-dose computed tomographic (CT) protocol that uses model-based iterative reconstruction (MBIR) and a standard-dose CT protocol that uses 30% adaptive statistical iterative reconstruction (ASIR) with filtered back projection. Materials and Methods: Institutional review board approval was obtained. Clinical CT images of the chest, abdomen, and pelvis obtained with a reduced-dose protocol were identified. Images were reconstructed with two algorithms: MBIR and 100% ASIR. All subjects had undergone standard-dose CT within the prior year, and the images were reconstructed with 30% ASIR. Reduced- and standard-dose images were evaluated objectively and subjectively. Reduced-dose images were evaluated for lesion detectability. Spatial resolution was assessed in a phantom. Radiation dose was estimated by using volumetric CT dose index (CTDIvol) and calculated size-specific dose estimates (SSDE). A combination of descriptive statistics, analysis of variance, and t tests was used for statistical analysis. Results: In the 25 patients who underwent the reduced-dose protocol, mean decrease in CTDIvol was 46% (range, 19%–65%) and mean decrease in SSDE was 44% (range, 19%–64%). Reduced-dose MBIR images had less noise (P > .004). Spatial resolution was superior for reduced-dose MBIR images. Reduced-dose MBIR images were equivalent to standard-dose images for lungs and soft tissues (P > .05) but were inferior for bones (P = .004). Reduced-dose 100% ASIR images were inferior for soft tissues (P < .002), lungs (P < .001), and bones (P < .001). By using the same reduced-dose acquisition, lesion detectability was better (38% [32 of 84 rated lesions]) or the same (62% [52 of 84 rated lesions]) with MBIR as compared with 100% ASIR. Conclusion: CT performed with a reduced-dose protocol and MBIR is feasible in the pediatric population, and it maintains diagnostic quality. © RSNA, 2013. Online supplemental material is available for this article. PMID:24091359
No-reference image quality assessment based on statistics of convolution feature maps
NASA Astrophysics Data System (ADS)
Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo
2018-04-01
We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that natural scene statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the degree of distortion of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the type and degree of distortion an image has suffered. Finally, a Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict a subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.
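A rough sketch of the feature-extraction and regression chain described above. Here a few fixed derivative-like kernels stand in for the CNN-trained kernels, and the moment statistics are a generic NSS-style choice, so this should be read as an assumption-laden outline rather than the proposed NSS layer.

import numpy as np
from scipy.ndimage import convolve
from scipy.stats import skew, kurtosis
from sklearn.svm import SVR

def feature_map_stats(image, kernels):
    """NSS-style moment statistics computed on convolution feature maps."""
    feats = []
    for k in kernels:
        fmap = convolve(image.astype(np.float64), k, mode='reflect')
        feats += [fmap.mean(), fmap.std(),
                  skew(fmap.ravel()), kurtosis(fmap.ravel())]
    return np.array(feats)

# Fixed derivative-like kernels standing in for learned CNN kernels.
kernels = [np.array([[1.0, -1.0]]),
           np.array([[1.0], [-1.0]]),
           np.array([[0.0, 1.0, 0.0], [1.0, -4.0, 1.0], [0.0, 1.0, 0.0]])]

# Usage (X: stacked feature vectors of training images, y: subjective scores):
# svr = SVR(kernel='rbf', C=10.0).fit(X, y)
# quality = svr.predict(feature_map_stats(test_image, kernels).reshape(1, -1))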
Primal/dual linear programming and statistical atlases for cartilage segmentation.
Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. Such a task is reformulated using a discrete set of deformations and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 with a sensitivity and specificity of 94.06% and 99.92%, respectively.
NASA Astrophysics Data System (ADS)
Huang, Jinxin; Clarkson, Eric; Kupinski, Matthew; Rolland, Jannick P.
2014-03-01
The prevalence of Dry Eye Disease (DED) in the USA is approximately 40 million in aging adults, with an economic burden of about $3.8 billion. However, a comprehensive understanding of tear film dynamics, which is the prerequisite to advancing the management of DED, is yet to be realized. To extend our understanding of tear film dynamics, we investigate the simultaneous estimation of the lipid and aqueous layer thicknesses with the combination of optical coherence tomography (OCT) and statistical decision theory. Specifically, we develop a mathematical model for Fourier-domain OCT in which we take into account the different statistical processes associated with the imaging chain. We formulate the first-order and second-order statistics of the output of the OCT system, which can be used to generate simulated OCT spectra. A tear film model, which includes a lipid and an aqueous layer on top of a rough corneal surface, is the object being imaged. We then implement a maximum-likelihood (ML) estimator to interpret the simulated OCT data and estimate the thicknesses of both layers of the tear film. Results show that an axial resolution of 1 μm allows estimates down to the nanometer scale. We use the root mean square error of the estimates as a metric to evaluate the system parameters, such as the trade-off between imaging speed and estimation precision. This framework further provides the theoretical basis for optimizing the imaging setup for a specific thickness estimation task.
The use of algorithmic behavioural transfer functions in parametric EO system performance models
NASA Astrophysics Data System (ADS)
Hickman, Duncan L.; Smith, Moira I.
2015-10-01
The use of mathematical models to predict the overall performance of an electro-optic (EO) system is well established as a methodology and is used widely to support requirements definition, system design, and performance prediction. Traditionally these models have been based upon cascades of transfer functions derived from established physical theory, such as the calculation of signal levels from radiometry equations, as well as the use of statistical models. However, the performance of an EO system is increasingly dominated by the on-board processing of the image data, and this automated interpretation of image content is complex in nature and presents significant modelling challenges. Models and simulations of EO systems tend either to involve processing of image data as part of a performance simulation (image-flow) or else to use a series of mathematical functions that attempt to define the overall system characteristics (parametric). The former approach is generally more accurate but statistically and theoretically weak in terms of specific operational scenarios, and is also time consuming. The latter approach is generally faster but is unable to provide accurate predictions of a system's performance under operational conditions. An alternative and novel architecture is presented in this paper which combines the processing speed of parametric models with the accuracy of image-flow representations in a statistically valid framework. An additional requirement for an effective simulation is a robust software design whose architecture reflects the structure of the EO system and its interfaces. As such, the design of the simulator can be viewed as a software prototype of a new EO system or an abstraction of an existing design. This new approach has been used successfully to model a number of complex military systems and has been shown to combine improved performance estimation with speed of computation. The paper describes the approach and architecture in detail, and example results based on a practical application are then given to illustrate the performance benefits. Finally, conclusions are drawn and comments given regarding the benefits and uses of the new approach.
Robust biological parametric mapping: an improved technique for multimodal brain image analysis
NASA Astrophysics Data System (ADS)
Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.
2011-03-01
Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
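A minimal sketch of the voxelwise robust-regression idea, using Huber-weighted iteratively reweighted least squares from statsmodels. The robust inference machinery (corrected p-values and the released software package) is not reproduced here, and the array shapes are hypothetical.

import numpy as np
import statsmodels.api as sm

def robust_voxelwise_fit(y_maps, design):
    """Voxelwise robust GLM. y_maps: (n_subjects, n_voxels) regressand maps;
    design: (n_subjects, n_regressors), which may itself contain
    imaging-derived regressors. Returns robust beta estimates per voxel."""
    X = sm.add_constant(design)
    betas = np.empty((X.shape[1], y_maps.shape[1]))
    for v in range(y_maps.shape[1]):
        fit = sm.RLM(y_maps[:, v], X, M=sm.robust.norms.HuberT()).fit()
        betas[:, v] = fit.params               # Huber-weighted coefficients
    return betas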
Quantitative metrics for assessment of chemical image quality and spatial resolution
Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.
2016-02-28
Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
[Road Extraction in Remote Sensing Images Based on Spectral and Edge Analysis].
Zhao, Wen-zhi; Luo, Li-qun; Guo, Zhou; Yue, Jun; Yu, Xue-ying; Liu, Hui; Wei, Jing
2015-10-01
Roads are typically man-made objects in urban areas. Road extraction from high-resolution images has important applications for urban planning and transportation development. However, because of confusable spectral characteristics, it is difficult to distinguish roads from other objects using only traditional classification methods that depend mainly on spectral information. Edges are an important feature for identifying linear objects such as roads, and the distribution patterns of edges vary greatly among different object classes, so it is crucial to combine edge statistics with spectral information. In this study, a new method that combines spectral information and edge statistical features is proposed. First, edge detection is performed with a self-adaptive mean-shift algorithm on the panchromatic band, which greatly reduces pseudo-edges and noise effects. Then, edge statistical features are obtained from an edge statistical model that measures the length and angle distribution of edges. Finally, by integrating the spectral and edge statistical features, an SVM classifier is used to label the image and roads are ultimately extracted. A series of experiments shows that the overall accuracy of the proposed method is 93%, compared with only 78% for the traditional approach. The results demonstrate that the proposed method is efficient and valuable for road extraction, especially on high-resolution images.
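As a rough illustration of the classification step described above (not the authors' code), the sketch below combines per-pixel spectral values with a simple edge-density statistic and feeds them to an SVM; the window size, kernel choice and function names are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.svm import SVC

    def edge_density(edge_map, window=15):
        # fraction of edge pixels in a sliding window around each pixel
        return uniform_filter(edge_map.astype(float), size=window)

    def classify_roads(spectral, edge_map, labels, train_mask):
        # spectral: (H, W, B) bands; edge_map, labels, train_mask: (H, W)
        density = edge_density(edge_map)
        feats = np.dstack([spectral, density[..., None]])
        feats = feats.reshape(-1, feats.shape[-1])
        clf = SVC(kernel="rbf", C=1.0)
        clf.fit(feats[train_mask.ravel()], labels.ravel()[train_mask.ravel()])
        return clf.predict(feats).reshape(edge_map.shape)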
Initial evaluation of discrete orthogonal basis reconstruction of ECT images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moody, E.B.; Donohue, K.D.
1996-12-31
Discrete orthogonal basis restoration (DOBR) is a linear, non-iterative, and robust method for solving inverse problems for systems characterized by shift-variant transfer functions. This simulation study evaluates the feasibility of using DOBR for reconstructing emission computed tomographic (ECT) images. The imaging system model uses typical SPECT parameters and incorporates the effects of attenuation, spatially-variant PSF, and Poisson noise in the projection process. Sample reconstructions and statistical error analyses for a class of digital phantoms compare the DOBR performance for Hartley and Walsh basis functions. Test results confirm that DOBR with either basis set produces images with good statistical properties. No problems were encountered with reconstruction instability. The flexibility of the DOBR method and its consistent performance warrants further investigation of DOBR as a means of ECT image reconstruction.
NASA Astrophysics Data System (ADS)
Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.
2018-04-01
Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, multi-resolution wavelet transform and a local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by a wavelet transform at three levels. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) is extracted, followed by a maximization operation to integrate the information across directions. Afterward, a cross-scale operation is implemented to fuse the different levels of information. Finally, the local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features, and an adaptive threshold algorithm is used to detect the built-up areas. Experiments are conducted on ZY-3 and Quickbird panchromatic satellite images, and the results show that the proposed method is very effective for built-up area detection.
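A minimal sketch of the wavelet step, under the assumption that an undecimated transform is acceptable (so the detail maps keep the image size and no resampling is needed); the per-pixel maximum over the three orientations and a crude global threshold stand in for the cross-scale fusion and adaptive thresholding described above.

    import numpy as np
    import pywt

    def built_up_response(image, wavelet="haar", levels=3):
        # image: 2-D array whose sides are divisible by 2**levels (assumption)
        fused = np.zeros(image.shape, dtype=float)
        for _, (cH, cV, cD) in pywt.swt2(image.astype(float), wavelet, level=levels):
            # keep the strongest detail response among the three orientations
            fused += np.maximum(np.abs(cH), np.maximum(np.abs(cV), np.abs(cD)))
        mask = fused > fused.mean() + 2.0 * fused.std()   # stand-in for the adaptive threshold
        return fused, mask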
Zoletić, Emina; Duraković-Belko, Elvira
2009-09-01
The aim of this research was to examine differences between two groups of girls, models and ballerinas (with risk factors - experimental group) and young students (control group), in body-image perception, body mass index, neurotic perfectionism, body-image distortion and symptoms of eating disorders. The research was conducted with 91 participants divided into two groups: a control group of 55 students at the University of Sarajevo and an experimental group of 13 professional ballerinas and 23 professional models. Several measuring instruments were used: Body Mass Index; Body Mass Index - Silhouette Matching Test; Neurotic Perfectionism Questionnaire; Eating Disorders Inventory. Statistically significant differences occurred between the two groups of girls in body-image perception and objective position on the body mass index chart. All 91 participants saw themselves as obese. Statistically significant differences also occurred in body-image distortion and symptoms of eating disorders in the experimental group, which scored high on body-image distortion, the Eating Disorders Inventory and neurotic perfectionism. Based on the results of this study, we may conclude that there are groups of people who have risk factors for developing some kind of eating disorder, so it would be desirable to create a preventive intervention for young ballerinas and models, but also for the people who coach them: trainers, instructors and managers. For other population groups with risky behavior, such as young children, a prevention plan and modification of cultural influences on people's opinion of body image are extremely important.
Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth
2015-03-16
Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.
Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth
2015-01-01
Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063
Validation tools for image segmentation
NASA Astrophysics Data System (ADS)
Padfield, Dirk; Ross, James
2009-02-01
A large variety of image analysis tasks require the segmentation of various regions in an image. For example, segmentation is required to generate accurate models of brain pathology that are important components of modern diagnosis and therapy. While the manual delineation of such structures gives accurate information, the automatic segmentation of regions such as the brain and tumors from such images greatly enhances the speed and repeatability of quantifying such structures. The ubiquitous need for such algorithms has led to a wide range of image segmentation algorithms with various assumptions, parameters, and robustness. The evaluation of such algorithms is an important step in determining their effectiveness. Therefore, rather than developing new segmentation algorithms, we here describe validation methods for segmentation algorithms. Using similarity metrics comparing the automatic to manual segmentations, we demonstrate methods for optimizing the parameter settings for individual cases and across a collection of datasets using the Design of Experiment framework. We then employ statistical analysis methods to compare the effectiveness of various algorithms. We investigate several region-growing algorithms from the Insight Toolkit and compare their accuracy to that of a separate statistical segmentation algorithm. The segmentation algorithms are used with their optimized parameters to automatically segment the brain and tumor regions in MRI images of 10 patients. The validation tools indicate that none of the ITK algorithms studied is able to outperform the statistical segmentation algorithm with statistical significance, although they perform reasonably well considering their simplicity.
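For readers unfamiliar with the similarity metrics such validation relies on, a minimal sketch of two of the usual overlap measures (Dice and Jaccard) between an automatic and a manual binary segmentation is given here; it is illustrative only and not tied to the specific toolkit used in the paper.

    import numpy as np

    def dice(auto_mask, manual_mask):
        # Dice similarity coefficient between two binary masks
        a, m = auto_mask.astype(bool), manual_mask.astype(bool)
        total = a.sum() + m.sum()
        return 2.0 * np.logical_and(a, m).sum() / total if total else 1.0

    def jaccard(auto_mask, manual_mask):
        # intersection-over-union of two binary masks
        a, m = auto_mask.astype(bool), manual_mask.astype(bool)
        union = np.logical_or(a, m).sum()
        return np.logical_and(a, m).sum() / union if union else 1.0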
a Comparative Analysis of Spatiotemporal Data Fusion Models for Landsat and Modis Data
NASA Astrophysics Data System (ADS)
Hazaymeh, K.; Almagbile, A.
2018-04-01
In this study, three documented spatiotemporal data fusion models were applied to Landsat-7 and MODIS surface reflectance and NDVI. The algorithms included the spatial and temporal adaptive reflectance fusion model (STARFM), the sparse-representation-based spatiotemporal reflectance fusion model (SPSTFM), and the spatiotemporal image-fusion model (STI-FM). The objectives of this study were to (i) compare the performance of these three fusion models using one Landsat-MODIS spectral reflectance image pair and time-series datasets from the Coleambally irrigation area in Australia, and (ii) quantitatively evaluate the accuracy of the synthetic images generated by each fusion model using statistical measurements. Results showed that the three fusion models predicted the synthetic Landsat-7 image with adequate agreement. STI-FM produced more accurate reconstructions of both the Landsat-7 spectral bands and NDVI. Furthermore, it produced surface reflectance images having the highest correlation with the actual Landsat-7 images. This study indicates that STI-FM would be more suitable for spatiotemporal data fusion applications such as vegetation monitoring, drought monitoring, and evapotranspiration.
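The abstract does not list the statistical measurements used; a plausible minimal sketch of a per-band comparison between a synthetic image and the actual Landsat-7 acquisition (RMSE, bias and Pearson correlation) is shown below, with all names illustrative.

    import numpy as np

    def fusion_accuracy(predicted, actual):
        # predicted, actual: arrays of matching shape for one spectral band or NDVI
        p = predicted.ravel().astype(float)
        a = actual.ravel().astype(float)
        return {
            "rmse": float(np.sqrt(np.mean((p - a) ** 2))),
            "bias": float(np.mean(p - a)),
            "correlation": float(np.corrcoef(p, a)[0, 1]),
        }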
Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.
Zhao, Wenyi; Zhang, Chao
2008-07-01
We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE) that is required for focal-plane-array-like sensors to obtain clean and enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and, in the presence of subpixel motion, simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out perfect super-resolution in principle by exploiting available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared in this study. Statistics-based and filtered back-projection yielded similar estimation performance, while model-based yielded higher precision but lower accuracy in the case of small lesions. Iterative reconstructions exhibited higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
Variance analysis of x-ray CT sinograms in the presence of electronic noise background.
Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing
2012-07-01
Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of electronic noise background. The authors first studied the statistical moment properties of detected signals in CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon electronic noise background. Then the authors derived, via the Taylor expansion, a new formula for the mean-variance relationship of the detected signals in CT sinogram domain, wherein the image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To get insight into the derived new formula by experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. The results demonstrated that the electronic noise background is significant when low-mAs (or low-dose) scan is performed. The influence of the electronic noise background should be considered in low-dose CT imaging.
Variance analysis of x-ray CT sinograms in the presence of electronic noise background
Ma, Jianhua; Liang, Zhengrong; Fan, Yi; Liu, Yan; Huang, Jing; Chen, Wufan; Lu, Hongbing
2012-01-01
Purpose: Low-dose x-ray computed tomography (CT) is clinically desired. Accurate noise modeling is a fundamental issue for low-dose CT image reconstruction via statistics-based sinogram restoration or statistical iterative image reconstruction. In this paper, the authors analyzed the statistical moments of low-dose CT data in the presence of electronic noise background. Methods: The authors first studied the statistical moment properties of detected signals in CT transmission domain, where the noise of detected signals is considered as quanta fluctuation upon electronic noise background. Then the authors derived, via the Taylor expansion, a new formula for the mean–variance relationship of the detected signals in CT sinogram domain, wherein the image formation becomes a linear operation between the sinogram data and the unknown image, rather than a nonlinear operation in the CT transmission domain. To get insight into the derived new formula by experiments, an anthropomorphic torso phantom was scanned repeatedly by a commercial CT scanner at five different mAs levels from 100 down to 17. Results: The results demonstrated that the electronic noise background is significant when low-mAs (or low-dose) scan is performed. Conclusions: The influence of the electronic noise background should be considered in low-dose CT imaging. PMID:22830738
Pourghassem, Hossein
2012-01-01
Material detection is a vital need for dual energy X-ray luggage inspection systems at airport security checkpoints and other strategic places. In this paper, a novel material detection algorithm based on statistical trainable models using the two-dimensional power density function (PDF) of three material categories in dual energy X-ray images is proposed. In this algorithm, the PDF of each material category, used as a statistical model, is estimated from the transmission measurement values of the low- and high-energy X-ray images by Gaussian mixture models (GMM). The material label of each pixel of an object is determined from the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum-voting scheme over an image neighborhood as a post-processing stage. Using background-removal and denoising stages, the high- and low-energy X-ray images are enhanced as a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to a constructed color image that uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual energy X-ray luggage inspection system. The results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
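A hedged sketch of the statistical-model step, assuming the training data are (low-energy, high-energy) transmission pairs grouped by material category: one Gaussian mixture is fitted per category and each pixel is labeled by the category with the highest likelihood. The component count and function names are assumptions, and the voting and enhancement stages are omitted.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_material_models(training_pairs, n_components=3):
        # training_pairs: dict mapping 'organic'/'metallic'/'mixed' to (N, 2) arrays
        return {name: GaussianMixture(n_components=n_components).fit(x)
                for name, x in training_pairs.items()}

    def label_pixels(low_img, high_img, models):
        pixels = np.column_stack([low_img.ravel(), high_img.ravel()]).astype(float)
        names = list(models.keys())
        loglik = np.column_stack([models[n].score_samples(pixels) for n in names])
        labels = np.array(names)[loglik.argmax(axis=1)]
        return labels.reshape(low_img.shape)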
Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George
2009-08-01
We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
A Patch-Based Method for Repetitive and Transient Event Detection in Fluorescence Imaging
Boulanger, Jérôme; Gidon, Alexandre; Kervran, Charles; Salamero, Jean
2010-01-01
Automatic detection and characterization of molecular behavior in large data sets obtained by fast imaging in advanced light microscopy become key issues to decipher the dynamic architectures and their coordination in the living cell. Automatic quantification of the number of sudden and transient events observed in fluorescence microscopy is discussed in this paper. We propose a calibrated method based on the comparison of image patches expected to distinguish sudden appearing/vanishing fluorescent spots from other motion behaviors such as lateral movements. We analyze the performances of two statistical control procedures and compare the proposed approach to a frame difference approach using the same controls on a benchmark of synthetic image sequences. We have then selected a molecular model related to membrane trafficking and considered real image sequences obtained in cells stably expressing an endocytic-recycling trans-membrane protein, the Langerin-YFP, for validation. With this model, we targeted the efficient detection of fast and transient local fluorescence concentration arising in image sequences from a data base provided by two different microscopy modalities, wide field (WF) video microscopy using maximum intensity projection along the axial direction and total internal reflection fluorescence microscopy. Finally, the proposed detection method is briefly used to statistically explore the effect of several perturbations on the rate of transient events detected on the pilot biological model. PMID:20976222
Multiscale study for stochastic characterization of shale samples
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad; Piri, Mohammad
2016-03-01
Characterization of shale reservoirs, which are typically of low permeability, is very difficult because of the presence of multiscale structures. While three-dimensional (3D) imaging can be an ultimate solution for revealing important complexities of such reservoirs, acquiring such images is costly and time consuming. On the other hand, high-quality 2D images, which are widely available, also reveal useful information about shales' pore connectivity and size. Most of the current modeling methods that are based on 2D images use limited and insufficient extracted information. One remedy to the shortcoming is direct use of qualitative images, a concept that we introduce in this paper. We demonstrate that higher-order statistics (as opposed to the traditional two-point statistics, such as variograms) are necessary for developing an accurate model of shales, and describe an efficient method for using 2D images that is capable of utilizing qualitative and physical information within an image and generating stochastic realizations of shales. We then further refine the model by describing and utilizing several techniques, including an iterative framework, for removing some possible artifacts and better pattern reproduction. Next, we introduce a new histogram-matching algorithm that accounts for concealed nanostructures in shale samples. We also present two new multiresolution and multiscale approaches for dealing with distinct pore structures that are common in shale reservoirs. In the multiresolution method, the original high-quality image is upscaled in a pyramid-like manner in order to achieve more accurate global and long-range structures. The multiscale approach integrates two images, each containing diverse pore networks - the nano- and microscale pores - using a high-resolution image representing small-scale pores and, at the same time, reconstructing large pores using a low-quality image. Eventually, the results are integrated to generate a 3D model. The methods are tested on two shale samples for which full 3D samples are available. The quantitative accuracy of the models is demonstrated by computing their morphological and flow properties and comparing them with those of the actual 3D images. The success of the method hinges upon the use of very different low- and high-resolution images.
Photon Limited Images and Their Restoration
1976-03-01
arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis for the analysis of image restoration. In the second part the problem of linear least-square restoration of imagery limited by photon noise is
NASA Astrophysics Data System (ADS)
Dang, H.; Stayman, J. W.; Sisniega, A.; Xu, J.; Zbijewski, W.; Wang, X.; Foos, D. H.; Aygun, N.; Koliatsos, V. E.; Siewerdsen, J. H.
2015-08-01
Non-contrast CT reliably detects fresh blood in the brain and is the current front-line imaging modality for intracranial hemorrhage such as that occurring in acute traumatic brain injury (contrast ~40-80 HU, size > 1 mm). We are developing flat-panel detector (FPD) cone-beam CT (CBCT) to facilitate such diagnosis in a low-cost, mobile platform suitable for point-of-care deployment. Such a system may offer benefits in the ICU, urgent care/concussion clinic, ambulance, and sports and military theatres. However, current FPD-CBCT systems face significant challenges that confound low-contrast, soft-tissue imaging. Artifact correction can overcome major sources of bias in FPD-CBCT but imparts noise amplification in filtered backprojection (FBP). Model-based reconstruction improves soft-tissue image quality compared to FBP by leveraging a high-fidelity forward model and image regularization. In this work, we develop a novel penalized weighted least-squares (PWLS) image reconstruction method with a noise model that includes accurate modeling of the noise characteristics associated with the two dominant artifact corrections (scatter and beam-hardening) in CBCT and utilizes modified weights to compensate for noise amplification imparted by each correction. Experiments included real data acquired on a FPD-CBCT test-bench and an anthropomorphic head phantom emulating intra-parenchymal hemorrhage. The proposed PWLS method demonstrated superior noise-resolution tradeoffs in comparison to FBP and PWLS with conventional weights (viz. at matched 0.50 mm spatial resolution, CNR = 11.9 compared to CNR = 5.6 and CNR = 9.9, respectively) and substantially reduced image noise especially in challenging regions such as skull base. The results support the hypothesis that with high-fidelity artifact correction and statistical reconstruction using an accurate post-artifact-correction noise model, FPD-CBCT can achieve image quality allowing reliable detection of intracranial hemorrhage.
Toward statistical modeling of saccadic eye-movement and visual saliency.
Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming
2014-11-01
In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Beyond simulating human saccadic behavior, we also demonstrate the effectiveness and robustness of our approach over state-of-the-art methods through extensive experiments on synthetic patterns and human eye-fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
A data colocation grid framework for big data medical image processing: backend design
NASA Astrophysics Data System (ADS)
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop and HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework using the built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
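To make the MapReduce summary statistic concrete, here is a plain-Python sketch (not the HadoopBase-MIP API): each mapper emits partial counts and sums for its shard of images, and the reducer combines them into a dataset-wide mean and standard deviation. All function names are hypothetical.

    import numpy as np

    def map_partial_stats(image_shard):
        # image_shard: iterable of image arrays handled by one worker
        n = s = ss = 0.0
        for img in image_shard:
            v = np.asarray(img, dtype=float).ravel()
            n += v.size
            s += v.sum()
            ss += np.square(v).sum()
        return n, s, ss

    def reduce_stats(partials):
        # combine the (count, sum, sum-of-squares) triples from all mappers
        n = sum(p[0] for p in partials)
        s = sum(p[1] for p in partials)
        ss = sum(p[2] for p in partials)
        mean = s / n
        std = float(np.sqrt(ss / n - mean ** 2))
        return mean, std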
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design.
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A
2018-03-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework's performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework using the built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available.
A Data Colocation Grid Framework for Big Data Medical Image Processing: Backend Design
Bao, Shunxing; Huo, Yuankai; Parvathaneni, Prasanna; Plassard, Andrew J.; Bermudez, Camilo; Yao, Yuang; Lyu, Ilwoo; Gokhale, Aniruddha; Landman, Bennett A.
2018-01-01
When processing large medical imaging studies, adopting high performance grid computing resources rapidly becomes important. We recently presented a "medical image processing-as-a-service" grid framework that offers promise in utilizing the Apache Hadoop ecosystem and HBase for data colocation by moving computation close to medical image storage. However, the framework has not yet proven to be easy to use in a heterogeneous hardware environment. Furthermore, the system has not yet been validated for the variety of multi-level analyses in medical imaging. Our target design criteria are (1) improving the framework’s performance in a heterogeneous cluster, (2) performing population-based summary statistics on large datasets, and (3) introducing a table design scheme for rapid NoSQL queries. In this paper, we present a heuristic backend application program interface (API) design for Hadoop & HBase for Medical Image Processing (HadoopBase-MIP). The API includes: Upload, Retrieve, Remove, Load balancer (for heterogeneous clusters) and MapReduce templates. A dataset summary statistic model is discussed and implemented using the MapReduce paradigm. We introduce an HBase table scheme for fast data query to better utilize the MapReduce model. Briefly, 5153 T1 images were retrieved from a university secure, shared web database and used to empirically access an in-house grid with 224 heterogeneous CPU cores. Three empirical experiments are presented and discussed: (1) the load balancer improves wall time 1.5-fold compared with a framework using the built-in data allocation strategy, (2) the summary statistic model is empirically verified on the grid framework and, compared with the same cluster deployed with a standard Sun Grid Engine (SGE), reduces wall clock time 8-fold and resource time 14-fold, and (3) the proposed HBase table scheme improves MapReduce computation with a 7-fold reduction in wall time compared with a naïve scheme when datasets are relatively small. The source code and interfaces have been made publicly available. PMID:29887668
Multivariate Strategies in Functional Magnetic Resonance Imaging
ERIC Educational Resources Information Center
Hansen, Lars Kai
2007-01-01
We discuss aspects of multivariate fMRI modeling, including the statistical evaluation of multivariate models and means for dimensional reduction. In a case study we analyze linear and non-linear dimensional reduction tools in the context of a "mind reading" predictive multivariate fMRI model.
Elbakri, Idris A; Fessler, Jeffrey A
2003-08-07
This paper describes a statistical image reconstruction method for x-ray CT that is based on a physical model that accounts for the polyenergetic x-ray source spectrum and the measurement nonlinearities caused by energy-dependent attenuation. Unlike our earlier work, the proposed algorithm does not require pre-segmentation of the object into the various tissue classes (e.g., bone and soft tissue) and allows mixed pixels. The attenuation coefficient of each voxel is modelled as the product of its unknown density and a weighted sum of energy-dependent mass attenuation coefficients. We formulate a penalized-likelihood function for this polyenergetic model and develop an iterative algorithm for estimating the unknown density of each voxel. Applying this method to simulated x-ray CT measurements of objects containing both bone and soft tissue yields images with significantly reduced beam hardening artefacts relative to conventional beam hardening correction methods. We also apply the method to real data acquired from a phantom containing various concentrations of potassium phosphate solution. The algorithm reconstructs an image with accurate density values for the different concentrations, demonstrating its potential for quantitative CT applications.
Event time analysis of longitudinal neuroimage data.
Sabuncu, Mert R; Bernal-Rusiel, Jorge L; Reuter, Martin; Greve, Douglas N; Fischl, Bruce
2014-08-15
This paper presents a method for the statistical analysis of the associations between longitudinal neuroimaging measurements, e.g., of cortical thickness, and the timing of a clinical event of interest, e.g., disease onset. The proposed approach consists of two steps, the first of which employs a linear mixed effects (LME) model to capture temporal variation in serial imaging data. The second step utilizes the extended Cox regression model to examine the relationship between time-dependent imaging measurements and the timing of the event of interest. We demonstrate the proposed method both for the univariate analysis of image-derived biomarkers, e.g., the volume of a structure of interest, and the exploratory mass-univariate analysis of measurements contained in maps, such as cortical thickness and gray matter density. The mass-univariate method employs a recently developed spatial extension of the LME model. We applied our method to analyze structural measurements computed using FreeSurfer, a widely used brain Magnetic Resonance Image (MRI) analysis software package. We provide a quantitative and objective empirical evaluation of the statistical performance of the proposed method on longitudinal data from subjects suffering from Mild Cognitive Impairment (MCI) at baseline. Copyright © 2014 Elsevier Inc. All rights reserved.
IEEE International Symposium on Biomedical Imaging.
2017-01-01
The IEEE International Symposium on Biomedical Imaging (ISBI) is a scientific conference dedicated to mathematical, algorithmic, and computational aspects of biological and biomedical imaging, across all scales of observation. It fosters knowledge transfer among different imaging communities and contributes to an integrative approach to biomedical imaging. ISBI is a joint initiative from the IEEE Signal Processing Society (SPS) and the IEEE Engineering in Medicine and Biology Society (EMBS). The 2018 meeting will include tutorials, and a scientific program composed of plenary talks, invited special sessions, challenges, as well as oral and poster presentations of peer-reviewed papers. High-quality papers are requested containing original contributions to the topics of interest including image formation and reconstruction, computational and statistical image processing and analysis, dynamic imaging, visualization, image quality assessment, and physical, biological, and statistical modeling. Accepted 4-page regular papers will be published in the symposium proceedings published by IEEE and included in IEEE Xplore. To encourage attendance by a broader audience of imaging scientists and offer additional presentation opportunities, ISBI 2018 will continue to have a second track featuring posters selected from 1-page abstract submissions without subsequent archival publication.
Applications of geostatistics and Markov models for logo recognition
NASA Astrophysics Data System (ADS)
Pham, Tuan
2003-01-01
Spatial covariances based on geostatistics are extracted as representative features of logo or trademark images. These spatial covariances are different from other statistical features for image analysis in that the structural information of an image is independent of the pixel locations and represented in terms of spatial series. We then design a classifier in the sense of hidden Markov models to make use of these geostatistical sequential data to recognize the logos. High recognition rates are obtained from testing the method against a public-domain logo database.
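A rough sketch of the kind of spatial-covariance signature described above: the covariance between pixel values separated by increasing horizontal lags forms the spatial series used as the feature sequence for the Markov-model classifier. The lag range is an assumption, and the hidden Markov modeling step is omitted.

    import numpy as np

    def spatial_covariance_series(image, max_lag=20):
        # image: 2-D grayscale logo; returns covariance at lags 1..max_lag (columns)
        x = image.astype(float)
        x = x - x.mean()
        series = []
        for lag in range(1, max_lag + 1):
            a, b = x[:, :-lag], x[:, lag:]   # pixel pairs offset by `lag` columns
            series.append(float(np.mean(a * b)))
        return np.array(series)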
NASA Astrophysics Data System (ADS)
Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.
2017-10-01
Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame into the result that would be obtained if all the axes were perfectly aligned. For such adjustment, images of stars as infinitely distant objects are used: with perfectly aligned cameras, the star images in the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would have been if measured with an ideally aligned setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
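A minimal sketch of the simplest (affine, paraxial) deformation model: the six affine parameters are estimated by linear least squares from star coordinates matched between the two frames, then used to map one frame's coordinates onto the other. This ignores the 3rd- and 5th-order distortion terms, and all names are illustrative.

    import numpy as np

    def fit_affine(src_xy, dst_xy):
        # src_xy, dst_xy: (N, 2) matched star coordinates in the two frames, N >= 3
        A = np.hstack([src_xy, np.ones((src_xy.shape[0], 1))])   # (N, 3) design matrix
        params, *_ = np.linalg.lstsq(A, dst_xy, rcond=None)      # (3, 2) affine parameters
        return params

    def apply_affine(params, xy):
        # map coordinates from the source frame into the destination frame
        return np.hstack([xy, np.ones((xy.shape[0], 1))]) @ params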
Gravitational lensing by a smoothly variable surface mass density
NASA Technical Reports Server (NTRS)
Paczynski, Bohdan; Wambsganss, Joachim
1989-01-01
The statistical properties of gravitational lensing due to smooth but nonuniform distributions of matter are considered. It is found that a majority of triple images had a parity characteristic for 'shear-induced' lensing. Almost all cases of triple or multiple imaging were associated with large surface density enhancements, and lensing objects were present between the images. Thus, the observed gravitational lens candidates for which no lensing object has been detected between the images are unlikely to be a result of asymmetric distribution of mass external to the image circle. In a model with smoothly variable surface mass density, moderately and highly amplified images tended to be single rather than multiple. An opposite trend was found in models which had singularities in the surface mass distribution.
Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study
Bornschein, Jörg; Henniges, Marc; Lücke, Jörg
2013-01-01
Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response is observed since reverse correlation is used in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problems are how to count the frequency of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model uses a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
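A hedged sketch of the saliency-weighted counting step: each local descriptor is assigned to its nearest visual word, and its vote is weighted by the saliency value at its image location rather than counted as one. The variable names and the nearest-neighbour assignment are assumptions.

    import numpy as np

    def weighted_bow_histogram(descriptors, locations, saliency_map, codebook):
        # descriptors: (N, D); locations: (N, 2) integer (row, col); codebook: (K, D)
        dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
        words = dists.argmin(axis=1)                      # nearest visual word per descriptor
        weights = saliency_map[locations[:, 0], locations[:, 1]]
        hist = np.bincount(words, weights=weights, minlength=codebook.shape[0])
        return hist / (hist.sum() + 1e-12)                # normalized weighted histogram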
Tsai, Tsung-Yuan; Li, Jing-Sheng; Wang, Shaobai; Li, Pingyue; Kwon, Young-Min; Li, Guoan
2015-01-01
The statistical shape model (SSM) method that uses 2D images of the knee joint to predict the three-dimensional (3D) joint surface model has been reported in the literature. In this study, we constructed a SSM database using 152 human computed tomography (CT) knee joint models, including the femur, tibia and patella and analysed the characteristics of each principal component of the SSM. The surface models of two in vivo knees were predicted using the SSM and their 2D bi-plane fluoroscopic images. The predicted models were compared to their CT joint models. The differences between the predicted 3D knee joint surfaces and the CT image-based surfaces were 0.30 ± 0.81 mm, 0.34 ± 0.79 mm and 0.36 ± 0.59 mm for the femur, tibia and patella, respectively (average ± standard deviation). The computational time for each bone of the knee joint was within 30 s using a personal computer. The analysis of this study indicated that the SSM method could be a useful tool to construct 3D surface models of the knee with sub-millimeter accuracy in real time. Thus, it may have a broad application in computer-assisted knee surgeries that require 3D surface models of the knee.
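A simplified sketch of how such a statistical shape model can be built, assuming the training surfaces are already in point correspondence and rigidly aligned: the mean shape and the leading principal components (deformation modes) are obtained from an SVD of the centred training matrix. Fitting the model to bi-plane fluoroscopy is not shown.

    import numpy as np

    def build_ssm(training_shapes, n_modes=10):
        # training_shapes: (n_subjects, n_vertices, 3) corresponding, aligned vertices
        X = training_shapes.reshape(training_shapes.shape[0], -1)
        mean_shape = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        modes = Vt[:n_modes]                               # principal deformation modes
        stddevs = S[:n_modes] / np.sqrt(X.shape[0] - 1)    # per-mode standard deviations
        return mean_shape, modes, stddevs

    def synthesize(mean_shape, modes, coeffs):
        # generate a new surface from one coefficient per retained mode
        return (mean_shape + coeffs @ modes).reshape(-1, 3)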
Computational Motion Phantoms and Statistical Models of Respiratory Motion
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian
Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistics-based iterative reconstruction methods assume that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption in the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) diagnostic CT, where post-log data are well modeled by a normal distribution; (2) low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage; and (3) a ULD regime that is photon-starved and in which the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value for which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
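The flavour of the experiment can be reproduced with a toy simulation, sketched below under assumed flux and electronic-noise levels: Poisson quanta are added to zero-mean Gaussian electronic noise, a crude clipping step stands in for a non-positivity correction, and the post-log mean and variance are examined as the line integral grows.

    import numpy as np

    rng = np.random.default_rng(0)
    I0, sigma_e, n = 1e5, 10.0, 100000      # incident photons, electronic noise std, samples
    for line_integral in (1.0, 4.0, 4.8, 6.0):
        mean_counts = I0 * np.exp(-line_integral)
        detected = rng.poisson(mean_counts, n) + rng.normal(0.0, sigma_e, n)
        detected = np.clip(detected, 0.1, None)   # crude non-positivity correction
        post_log = -np.log(detected / I0)
        print(line_integral, post_log.mean(), post_log.var())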
Statistical Modeling of Retinal Optical Coherence Tomography.
Amini, Zahra; Rabbani, Hossein
2016-06-01
In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model introduces a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure and in OCT each of these layers has a specific pdf that is corrupted by speckle noise; therefore, a mixture model is proposed for statistical modeling of OCT images. A Normal-Laplace distribution, which is the convolution of a Laplace pdf with Gaussian noise, is proposed as the distribution of each component of this model. The Laplace pdf is chosen because of the monotonically decaying behavior of OCT intensities within each layer in healthy cases. After fitting the mixture model to the data, each component is Gaussianized and the components are combined by an Averaged Maximum A Posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCT volumes taken with the Topcon 3D OCT and five 3D OCT volumes from Age-related Macular Degeneration (AMD) patients taken with the Zeiss Cirrus HD-OCT. A comparison with two competing techniques demonstrates the advantage of the proposed method both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers using the proposed contrast enhancement as a preprocessing step is demonstrated.
TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, K; Bakic, P; Abbey, C
2014-06-15
This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM taskgroup TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state of the science, as well as the advantages and limitations of various approaches to testing realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state of the science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.
NASA Astrophysics Data System (ADS)
Panayiotou, M.; King, A. P.; Ma, Y.; Housden, R. J.; Rinaldi, C. A.; Gill, J.; Cooklin, M.; O'Neill, M.; Rhode, K. S.
2013-11-01
The motion and deformation of catheters that lie inside cardiac structures can provide valuable information about the motion of the heart. In this paper we describe the formation of a novel statistical model of the motion of a coronary sinus (CS) catheter based on principal component analysis of tracked electrode locations from standard mono-plane x-ray fluoroscopy images. We demonstrate the application of our model for the purposes of retrospective cardiac and respiratory gating of x-ray fluoroscopy images in normal dose x-ray fluoroscopy images, and demonstrate how a modification of the technique allows application to very low dose scenarios. We validated our method on ten mono-plane imaging sequences comprising a total of 610 frames from ten different patients undergoing radiofrequency ablation for the treatment of atrial fibrillation. For normal dose images we established systole, end-inspiration and end-expiration gating with success rates of 100%, 92.1% and 86.9%, respectively. For very low dose applications, the method was tested on the same ten mono-plane x-ray fluoroscopy sequences without noise and with added noise at signal to noise ratio (SNR) values of √50, √10, √8, √6, √5, √2 and √1 to simulate the image quality of increasingly lower dose x-ray images. The method was able to detect the CS catheter even in the lowest SNR images with median errors not exceeding 2.6 mm per electrode. Furthermore, gating success rates of 100%, 71.4% and 85.7% were achieved at the low SNR value of √2, representing a dose reduction of more than 25 times. Thus, the technique has the potential to extract useful information whilst substantially reducing the radiation exposure.
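An illustrative sketch of the gating idea (a simplification of the principal component analysis described above): PCA is run over the tracked CS electrode coordinates across frames, and frames at the extremes of the dominant component, assumed here to follow the respiratory cycle, are selected as gated frames. All names and the choice of component are assumptions.

    import numpy as np

    def gate_frames(electrode_xy, n_gates=5):
        # electrode_xy: (n_frames, n_electrodes, 2) tracked electrode positions in pixels
        X = electrode_xy.reshape(electrode_xy.shape[0], -1)
        X = X - X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        phase = X @ Vt[0]                       # projection onto the first motion mode
        order = np.argsort(phase)
        # frames nearest the two extremes of the dominant (respiratory) component
        return order[:n_gates], order[-n_gates:]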
Wang, Shiying; Herbst, Elizabeth B.; Mauldin, F. William; Diakova, Galina B.; Klibanov, Alexander L.; Hossack, John A.
2016-01-01
Objectives: The objective of this study is to evaluate the minimum microbubble dose for ultrasound molecular imaging to achieve statistically significant detection of angiogenesis in a mouse model. Materials and Methods: The pre-burst minus post-burst method was implemented on a Verasonics ultrasound research scanner using a multi-frame compounding pulse inversion imaging sequence. Biotinylated lipid (distearoyl phosphatidylcholine, DSPC-based) microbubbles that were conjugated with anti-vascular endothelial growth factor receptor 2 (VEGFR2) antibody (MBVEGFR2) or isotype control antibody (MBControl) were injected into mice carrying adenocarcinoma xenografts. Different injection doses ranging from 5 × 10⁴ to 1 × 10⁷ microbubbles per mouse were evaluated to determine the minimum diagnostically effective dose. Results: The proposed imaging sequence was able to achieve statistically significant detection (p < 0.05, n = 5) of VEGFR2 in tumors with a minimum MBVEGFR2 injection dose of only 5 × 10⁴ microbubbles per mouse (DSPC at 0.053 ng/g mouse body mass). Non-specific adhesion of MBControl at the same injection dose was negligible. Additionally, the targeted contrast ultrasound signal of MBVEGFR2 decreased with lower microbubble doses, while non-specific adhesion of MBControl increased with higher microbubble doses. Conclusions: 5 × 10⁴ microbubbles per animal is now the lowest injection dose on record for ultrasound molecular imaging to achieve statistically significant detection of molecular targets in vivo. Findings in this study provide us with further guidance for future developments of clinically translatable ultrasound molecular imaging applications using a lower dose of microbubbles. PMID:27654582
Modeling global scene factors in attention
NASA Astrophysics Data System (ADS)
Torralba, Antonio
2003-07-01
Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America
A Framework to Learn Physics from Atomically Resolved Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlcek, L.; Maksov, A.; Pan, M.
Here, we present a generalized framework for physics extraction, i.e., knowledge, from atomically resolved images, and show its utility by applying it to a model system of segregation of chalcogen atoms in an FeSe0.45Te0.55 superconductor system. We emphasize that the framework can be used for any imaging data for which a generative physical model exists. Consider that a generative physical model can produce a very large number of configurations, not all of which are observable. By applying a microscope function to a sub-set of this generated data, we form a simulated dataset on which statistics can be computed.
Statistical virtual eye model based on wavefront aberration
Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie
2012-01-01
Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112
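To make the Zernike-based construction concrete, the sketch below assembles a wavefront aberration map from a few low-order Zernike terms; the coefficient values, grid size, and Noll-style normalisation are illustrative assumptions rather than details taken from the reviewed work.

```python
import numpy as np

# Unit-pupil grid in polar coordinates.
y, x = np.mgrid[-1:1:256j, -1:1:256j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# A few low-order Zernike polynomials (Noll normalisation).
defocus    = np.sqrt(3) * (2 * rho**2 - 1)
astig_0deg = np.sqrt(6) * rho**2 * np.cos(2 * theta)
coma_x     = np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.cos(theta)
spherical  = np.sqrt(5) * (6 * rho**4 - 6 * rho**2 + 1)

# Hypothetical coefficients in micrometres (e.g. drawn from a measured
# population distribution when building a "statistical" virtual eye).
coeffs = {"defocus": 0.15, "astig": 0.05, "coma": 0.03, "spherical": 0.02}
wavefront = (coeffs["defocus"] * defocus + coeffs["astig"] * astig_0deg
             + coeffs["coma"] * coma_x + coeffs["spherical"] * spherical)
wavefront = np.where(pupil, wavefront, np.nan)

# RMS wavefront error over the pupil, a common summary of optical quality.
rms_error = np.sqrt(np.nanmean(wavefront**2))
```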
Teshima, Tara Lynn; Patel, Vaibhav; Mainprize, James G; Edwards, Glenn; Antonyshyn, Oleh M
2015-07-01
The utilization of three-dimensional modeling technology in craniomaxillofacial surgery has grown exponentially during the last decade. Future development, however, is hindered by the lack of a normative three-dimensional anatomic dataset and a statistical mean three-dimensional virtual model. The purpose of this study is to develop and validate a protocol to generate a statistical three-dimensional virtual model based on a normative dataset of adult skulls. Two hundred adult skull CT images were reviewed. The average three-dimensional skull was computed by processing each CT image in the series using thin-plate spline geometric morphometric protocol. Our statistical average three-dimensional skull was validated by reconstructing patient-specific topography in cranial defects. The experiment was repeated 4 times. In each case, computer-generated cranioplasties were compared directly to the original intact skull. The errors describing the difference between the prediction and the original were calculated. A normative database of 33 adult human skulls was collected. Using 21 anthropometric landmark points, a protocol for three-dimensional skull landmarking and data reduction was developed and a statistical average three-dimensional skull was generated. Our results show the root mean square error (RMSE) for restoration of a known defect using the native best match skull, our statistical average skull, and worst match skull was 0.58, 0.74, and 4.4 mm, respectively. The ability to statistically average craniofacial surface topography will be a valuable instrument for deriving missing anatomy in complex craniofacial defects and deficiencies as well as in evaluating morphologic results of surgery.
a Method of Time-Series Change Detection Using Full Polsar Images from Different Sensors
NASA Astrophysics Data System (ADS)
Liu, W.; Yang, J.; Zhao, J.; Shi, H.; Yang, L.
2018-04-01
Most of the existing change detection methods using full polarimetric synthetic aperture radar (PolSAR) are limited to detecting change between two points in time. In this paper, a novel method was proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of a time-series PolSAR was calculated by an omnibus statistic test. Secondly, difference images between any two images at different times were acquired by the Rj statistic test. A generalized Gaussian mixture model (GGMM) was used to obtain time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out the experiment of change detection by using the time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, in China. Results show that the proposed method can detect the time-series change from different sensors.
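As a rough illustration of the final step, the sketch below thresholds a difference image into change / no-change classes with a two-component mixture model; it uses a plain Gaussian mixture from scikit-learn instead of the generalized Gaussian mixture described in the abstract, and the synthetic difference image is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy difference image: mostly unchanged pixels plus a bright "changed" patch.
rng = np.random.default_rng(1)
diff = rng.gamma(shape=2.0, scale=1.0, size=(200, 200))
diff[80:120, 80:120] += 6.0

# Fit a two-component mixture to the difference values (a simplification of
# the generalized Gaussian mixture used in the paper).
gmm = GaussianMixture(n_components=2, random_state=0).fit(diff.reshape(-1, 1))
labels = gmm.predict(diff.reshape(-1, 1)).reshape(diff.shape)

# Call the component with the larger mean the "change" class.
change_label = int(np.argmax(gmm.means_.ravel()))
change_map = labels == change_label
```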
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graff, C.
This symposium will review recent advances in the simulation methods for evaluation of novel breast imaging systems – the subject of AAPM Task Group TG234. Our focus will be on the various approaches to development and validation of software anthropomorphic phantoms and their use in the statistical assessment of novel imaging systems using such phantoms along with computational models for the x-ray image formation process. Due to the dynamic development and complex design of modern medical imaging systems, the simulation of anatomical structures, image acquisition modalities, and the image perception and analysis offers substantial benefits of reduced cost, duration, and radiation exposure, as well as the known ground-truth and wide variability in simulated anatomies. For these reasons, Virtual Clinical Trials (VCTs) have been increasingly accepted as a viable tool for preclinical assessment of x-ray and other breast imaging methods. Activities of TG234 have encompassed the optimization of protocols for simulation studies, including phantom specifications, the simulated data representation, models of the imaging process, and statistical assessment of simulated images. The symposium will discuss the state-of-the-science of VCTs for novel breast imaging systems, emphasizing recent developments and future directions. Presentations will discuss virtual phantoms for intermodality breast imaging performance comparisons, extension of the breast anatomy simulation to the cellular level, optimized integration of the simulated imaging chain, and the novel directions in the observer models design. Learning Objectives: Review novel results in developing and applying virtual phantoms for inter-modality breast imaging performance comparisons; Discuss the efforts to extend the computer simulation of breast anatomy and pathology to the cellular level; Summarize the state of the science in optimized integration of modules in the simulated imaging chain; Compare novel directions in the design of observer models for task based validation of imaging systems. PB: Research funding support from the NIH, NSF, and Komen for the Cure; NIH funded collaboration with Barco, Inc. and Hologic, Inc.; Consultant to Delaware State Univ. and NCCPM, UK. AA: Employed at Barco Healthcare.; P. Bakic, NIH: (NIGMS P20 #GM103446, NCI R01 #CA154444); M. Das, NIH Research grants.
A data compression technique for synthetic aperture radar images
NASA Technical Reports Server (NTRS)
Frost, V. S.; Minden, G. J.
1986-01-01
A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining the image quality and cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
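For readers unfamiliar with block truncation coding, the following minimal sketch compresses each image block to a one-bit map plus two grey levels that preserve the block mean and standard deviation; basic BTC of this kind costs roughly 2 bits/pixel, and the paper's adaptive, variable-rate modification (which reaches about 1.6 bit/pixel) is not reproduced here.

```python
import numpy as np

def btc_block(block):
    """Encode/decode one block with basic block truncation coding (BTC):
    keep a 1-bit map plus two grey levels preserving mean and std."""
    n = block.size
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())
    if q in (0, n) or s == 0:        # flat block: a single level suffices
        return np.full_like(block, m)
    lo = m - s * np.sqrt(q / (n - q))
    hi = m + s * np.sqrt((n - q) / q)
    return np.where(bitmap, hi, lo)

def btc_image(img, bs=4):
    """Apply BTC block by block and return the reconstructed image."""
    out = np.empty(img.shape, dtype=float)
    for r in range(0, img.shape[0], bs):
        for c in range(0, img.shape[1], bs):
            out[r:r+bs, c:c+bs] = btc_block(img[r:r+bs, c:c+bs].astype(float))
    return out
```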
Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni
2013-12-01
To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR (all p<0.01). UL-MBIR was significantly better for subjective image noise and streak artifacts than L-ASIR and UL-ASIR (all p<0.01). There were no significant differences between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65), or diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.
Image Statistics and the Representation of Material Properties in the Visual Cortex
Baumgartner, Elisabeth; Gegenfurtner, Karl R.
2016-01-01
We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
Xiao, Qingtai; Xu, Jianxin; Wang, Hua
2016-08-16
A new index, the estimate of the error variance, which can be used to quantify the evolution of the flow patterns when multiphase components or tracers are difficult to distinguish, was proposed. The homogeneity degree of the luminance space distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was then generalized and applied to a multiphase macro mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced to describe the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize.
Shin, S M; Kim, Y-I; Choi, Y-S; Yamaguchi, T; Maki, K; Cho, B-H; Park, S-B
2015-01-01
To evaluate axial cervical vertebral (ACV) shape quantitatively and to build a prediction model for skeletal maturation level using statistical shape analysis for Japanese individuals. The sample included 24 female and 19 male patients with hand-wrist radiographs and CBCT images. Through generalized Procrustes analysis and principal components (PCs) analysis, the meaningful PCs were extracted from each ACV shape and analysed for the estimation regression model. Each ACV shape had meaningful PCs, except for the second axial cervical vertebra. Based on these models, the smallest prediction intervals (PIs) were from the combination of the shape space PCs, age and gender. Overall, the PIs of the male group were smaller than those of the female group. There was no significant correlation between centroid size as a size factor and skeletal maturation level. Our findings suggest that the ACV maturation method, which was applied by statistical shape analysis, could confirm information about skeletal maturation in Japanese individuals as an available quantifier of skeletal maturation and could be as useful a quantitative method as the skeletal maturation index.
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules including a scene model, sensor model and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model, MODTRAN, is used to simulate the radiative transfer based on user defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature / emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
CerebroMatic: A Versatile Toolbox for Spline-Based MRI Template Creation
Wilke, Marko; Altaye, Mekibib; Holland, Scott K.
2017-01-01
Brain image spatial normalization and tissue segmentation rely on prior tissue probability maps. Appropriately selecting these tissue maps becomes particularly important when investigating “unusual” populations, such as young children or elderly subjects. When creating such priors, the disadvantage of applying more deformation must be weighed against the benefit of achieving a crisper image. We have previously suggested that statistically modeling demographic variables, instead of simply averaging images, is advantageous. Both aspects (more vs. less deformation and modeling vs. averaging) were explored here. We used imaging data from 1914 subjects, aged 13 months to 75 years, and employed multivariate adaptive regression splines to model the effects of age, field strength, gender, and data quality. Within the spm/cat12 framework, we compared an affine-only with a low- and a high-dimensional warping approach. As expected, more deformation on the individual level results in lower group dissimilarity. Consequently, effects of age in particular are less apparent in the resulting tissue maps when using a more extensive deformation scheme. Using statistically-described parameters, high-quality tissue probability maps could be generated for the whole age range; they are consistently closer to a gold standard than conventionally-generated priors based on 25, 50, or 100 subjects. Distinct effects of field strength, gender, and data quality were seen. We conclude that an extensive matching for generating tissue priors may model much of the variability inherent in the dataset which is then not contained in the resulting priors. Further, the statistical description of relevant parameters (using regression splines) allows for the generation of high-quality tissue probability maps while controlling for known confounds. The resulting CerebroMatic toolbox is available for download at http://irc.cchmc.org/software/cerebromatic.php. PMID:28275348
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Schuster, Christina; Elamin, Marwa; Hardiman, Orla; Bede, Peter
2015-10-01
Recent quantitative neuroimaging studies have been successful in capturing phenotype and genotype-specific changes in dementia syndromes, amyotrophic lateral sclerosis, Parkinson's disease and other neurodegenerative conditions. However, the majority of imaging studies are cross-sectional, despite the obvious superiority of longitudinal study designs in characterising disease trajectories, response to therapy, progression rates and evaluating the presymptomatic phase of neurodegenerative conditions. The aim of this work is to perform a systematic review of longitudinal imaging initiatives in neurodegeneration focusing on methodology, optimal statistical models, follow-up intervals, attrition rates, primary study outcomes and presymptomatic studies. Longitudinal imaging studies were identified from 'PubMed' and reviewed from 1990 to 2014. The search terms 'longitudinal', 'MRI', 'presymptomatic' and 'imaging' were utilised in combination with one of the following degenerative conditions; Alzheimer's disease, amyotrophic lateral sclerosis/motor neuron disease, frontotemporal dementia, Huntington's disease, multiple sclerosis, Parkinson's disease, ataxia, HIV, alcohol abuse/dependence. A total of 423 longitudinal imaging papers and 103 genotype-based presymptomatic studies were identified and systematically reviewed. Imaging techniques, follow-up intervals and attrition rates showed significant variation depending on the primary diagnosis. Commonly used statistical models included analysis of annualised percentage change, mixed and random effect models, and non-linear cumulative models with acceleration-deceleration components. Although longitudinal imaging studies have the potential to provide crucial insights into the presymptomatic phase and natural trajectory of neurodegenerative processes a standardised design is required to enable meaningful data interpretation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to the tissue overlap problem caused by the superposition of normal tissues. The digital breast tomosynthesis (DBT) that measures several low dose projections over a limited angle range may be an alternative modality for breast imaging, since it allows the visualization of the cross-sectional information of the breast. DBT, however, may suffer from aliasing artifacts and severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on the statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inverse, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our finding.
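The sketch below shows the general shape of such a gradient-type reconstruction on a toy problem: an unweighted least-squares data term plus a smoothed (primal) TV penalty minimised by plain gradient descent. The matrix forward model, step size, and smoothing constant are assumptions for illustration; the paper's statistically weighted data term and dual TV formulation are not reproduced.

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed total-variation term sum(sqrt(dx^2 + dy^2 + eps))."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalised gradient field.
    div = (np.diff(px, axis=1, prepend=px[:, :1]) +
           np.diff(py, axis=0, prepend=py[:, :1]))
    return -div

def reconstruct(A, b, shape, lam=0.05, step=1e-3, iters=200):
    """Minimise ||A x - b||^2 + lam * TV(x) by plain gradient descent.
    A is a (n_measurements, n_pixels) matrix standing in for the
    limited-angle forward projector."""
    x = np.zeros(shape)
    for _ in range(iters):
        resid = A @ x.ravel() - b
        grad = 2 * (A.T @ resid).reshape(shape) + lam * tv_grad(x)
        x -= step * grad
    return x
```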
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
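For context, the statistics-based baselines mentioned above build on the RX detector. A minimal global RX sketch is shown below, scoring each pixel spectrum by its Mahalanobis distance from the scene mean; the local and subspace variants used in the comparison are beyond this illustration.

```python
import numpy as np

def rx_scores(cube):
    """Global RX detector: Mahalanobis distance of each pixel spectrum from
    the scene mean, using the scene covariance. `cube` is (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)        # pseudo-inverse for numerical stability
    Xc = X - mu
    scores = np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
    return scores.reshape(rows, cols)
```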
ERIC Educational Resources Information Center
Johnson, W.; Bouchard, T.J.
2005-01-01
In a heterogeneous sample of 436 adult individuals who completed 42 mental ability tests, we evaluated the relative statistical performance of three major psychometric models of human intelligence: the Cattell-Horn fluid-crystallized model, Vernon's verbal-perceptual model, and Carroll's three-strata model. The verbal-perceptual model fit…
The Müller-Lyer Illusion in a Computational Model of Biological Object Recognition
Zeman, Astrid; Obst, Oliver; Brooks, Kevin R.; Rich, Anina N.
2013-01-01
Studying illusions provides insight into the way the brain processes information. The Müller-Lyer Illusion (MLI) is a classical geometrical illusion of size, in which perceived line length is decreased by arrowheads and increased by arrowtails. Many theories have been put forward to explain the MLI, such as misapplied size constancy scaling, the statistics of image-source relationships and the filtering properties of signal processing in primary visual areas. Artificial models of the ventral visual processing stream allow us to isolate factors hypothesised to cause the illusion and test how these affect classification performance. We trained a feed-forward feature hierarchical model, HMAX, to perform a dual category line length judgment task (short versus long) with over 90% accuracy. We then tested the system in its ability to judge relative line lengths for images in a control set versus images that induce the MLI in humans. Results from the computational model show an overall illusory effect similar to that experienced by human subjects. No natural images were used for training, implying that misapplied size constancy and image-source statistics are not necessary factors for generating the illusion. A post-hoc analysis of response weights within a representative trained network ruled out the possibility that the illusion is caused by a reliance on information at low spatial frequencies. Our results suggest that the MLI can be produced using only feed-forward, neurophysiological connections. PMID:23457510
Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer
2009-10-01
Currently, conventional X-ray and CT images as well as invasive methods performed during the surgical intervention are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for the individual analysis of local bone quality in the proximal femur based on model-based analysis of CT and X-ray images of femur specimens will be proposed. A combined representation of shape and spatial intensity distribution of an object and different statistical approaches for dimensionality reduction are used to create a statistical appearance model in order to assess the local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It will be shown that the tools and algorithms presented herein are highly adequate to automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
Dynamics Explorer guest investigator
NASA Technical Reports Server (NTRS)
Sojka, Jan J.
1991-01-01
A data base of satellite particle, electric field, image, and plasma data was used to determine correlations between the fields and the particle auroral boundaries. A data base of 8 days of excellent coverage from all instruments was completed. The geomagnetic conditions associated with each of the selected data periods, the number of UV image passes per study day that were obtained, and the total number of UV images for each day are given in tabular form. For each of the days listed in Table 1, both Vector Electric Field Instrument (VEFI) electric potential data and LAPI integrated particle energy fluxes were obtained. On the average, between 8 and 11 passes of useful data per day were obtained. These data are displayed in a format such that either the statistical electric field model potential or the statistical precipitation energy flux could be superimposed. The Heppner and Maynard (1987) and Hardy et al. (1987) models were used for the electric potential and precipitation, respectively. In addition, the auroral image intensity along the Dynamics Explorer-2 satellite pass could be computed and plotted along with the LAPI precipitation data and Hardy et al. (1987) values.
NASA Astrophysics Data System (ADS)
Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan
2012-03-01
Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.
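As a small aside on the figure of merit, the snippet below computes a contrast-to-noise ratio from region-of-interest masks in the way such phantom comparisons are commonly scored; using the background standard deviation as the noise term is an assumption, not necessarily the exact definition used in the study.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, bg_mask):
    """CNR between a low-contrast insert and the surrounding background:
    |mean_ROI - mean_background| divided by the background noise (std)."""
    roi = image[roi_mask].astype(float)
    bg = image[bg_mask].astype(float)
    return abs(roi.mean() - bg.mean()) / bg.std()
```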
Hahn, Wolfram; Fricke-Zech, Susanne; Fialka-Fricke, Julia; Dullin, Christian; Zapf, Antonia; Gruber, Rudolf; Sennhenn-kirchner, Sabine; Kubein-Meesenburg, Dietmar; Sadat-Khonsari, Reza
2009-09-01
An investigation was conducted to compare the image quality of prototype flat-panel volume computed tomography (fpVCT) and multislice computed tomography (MSCT) of suture structures. Bone samples were taken from the midpalatal suture of 5 young (16 weeks) and 5 old (200 weeks) Sus scrofa domestica and fixed in formalin solution. An fpVCT prototype and an MSCT were used to obtain images of the specimens. The facial reformations were assessed by 4 observers using a 1 (excellent) to 5 (poor) rating scale for the weighted criteria visualization of the suture structure. A linear mixed model was used for statistical analysis. Results with P < .05 were considered to be statistically significant. The visualization of the suture of young specimens was significantly better than that of older animals (P < .001). The visualization of the suture with fpVCT was significantly better than that with MSCT (P < .001). Compared with MSCT, fpVCT produces superior results in the visualization of the midpalatal suture in a Sus scrofa domestica model.
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology of color balance between images. It works not only for small numbers of images but also for arbitrarily large numbers of images. Multiple adaptive methods are used. To obtain a color seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local stats. It is applied to the source images to obtain the color balanced output images. Five target color surface models are proposed. They are color point (or single color), color grid, and 1st, 2nd and 3rd order 2D polynomials. Least squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are automatically computed based on all source images or based on an external target image. Some special objects such as water and snow are filtered by a percentage cut or a given mask. Excellent results are achieved. The performance is extremely fast and supports on-the-fly color balancing for large numbers of images (possibly hundreds of thousands of images). Detailed algorithm and formulae are described. Rich examples, including big mosaic datasets (e.g., one containing 36,006 images), are given. Excellent results and performance are presented. The results show that this technology can be successfully used in various imagery to obtain color seamless mosaics. This algorithm has been used successfully in Esri ArcGIS.
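The core gamma adjustment can be illustrated in a few lines. The sketch below matches each channel's mean to a single target colour via gamma = log(target)/log(source) on intensities scaled to (0, 1]; the adaptive dodging windows, polynomial target colour surfaces, and water/snow filtering described in the abstract are omitted, and the 8-bit scaling is an assumption.

```python
import numpy as np

def gamma_balance(src, target_mean):
    """Match each channel's mean brightness to a target colour by a gamma
    curve: gamma = log(t) / log(s) maps the source mean s to the target
    mean t when intensities are scaled to (0, 1]."""
    src = np.clip(src.astype(float) / 255.0, 1e-4, 1.0)
    out = np.empty_like(src)
    for ch in range(src.shape[2]):
        s = src[..., ch].mean()
        t = np.clip(target_mean[ch] / 255.0, 1e-4, 1.0)
        gamma = np.log(t) / np.log(s)
        out[..., ch] = src[..., ch] ** gamma
    return (out * 255.0).round().astype(np.uint8)
```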
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
NASA Technical Reports Server (NTRS)
Davis, J. M.; Gerassimenko, M.; Krieger, A. S.; Vaiana, G. S.
1975-01-01
Simultaneous soft X-ray spectroscopic and broad-band imaging observations of an active region have been analyzed together to determine the parameters which describe the coronal plasma. From the spectroscopic data, models of temperature-emission measure-elemental abundance have been constructed which provide acceptable statistical fits. By folding these possible models through the imaging analysis, models which are not self-consistent can be rejected. In this way, only the oxygen, neon, and iron abundances of Pottasch (1967), combined with either an isothermal or exponential temperature-emission-measure model, are consistent with both sets of data. Contour maps of electron temperature and density for the active region have been constructed from the imaging data. The implications of the analysis for the determination of coronal abundances and for future satellite experiments are discussed.
An energy- and depth-dependent model for x-ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallas, Brandon D.; Boswell, Jonathan S.; Badano, Aldo
In this paper, we model an x-ray imaging system, paying special attention to the energy- and depth-dependent characteristics of the inputs and interactions: x rays are polychromatic, interaction depth and conversion to optical photons is energy-dependent, optical scattering and the collection efficiency depend on the depth of interaction. The model we construct is a random function of the point process that begins with the distribution of x rays incident on the phosphor and ends with optical photons being detected by the active area of detector pixels to form an image. We show how the point-process representation can be used to calculate the characteristic statistics of the model. We then simulate a Gd2O2S:Tb phosphor, estimate its characteristic statistics, and proceed with a signal-detection experiment to investigate the impact of the pixel fill factor on detecting spherical calcifications (the signal). The two extremes possible from this experiment are that SNR² does not change with fill factor or changes in proportion to fill factor. In our results, the impact of fill factor is between these extremes, and depends on the diameter of the signal.
Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R
2014-02-01
The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray level co-occurrence texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d obtained using a digital camera and additional lighting. Steaks were designated degraded or not-degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical analysis (STEPWISE regression model) and artificial neural network (support vector machine model, SVM) methods were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat. © 2013.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noo, F; Guo, Z
2016-06-15
Purpose: Penalized-weighted least-square reconstruction has become an important research topic in CT, to reduce dose without affecting image quality. Two components impact image quality in this reconstruction: the statistical weights and the use of an edge-preserving penalty term. We are interested in assessing the influence of statistical weights on their own, without the edge-preserving feature. Methods: The influence of statistical weights on image quality was assessed in terms of low-contrast detail detection using LROC analysis. The task amounted to detect and localize a 6-mm lesion with random contrast inside the FORBILD head phantom. A two-alternative forced-choice experiment was used with two human observers performing the task. Reconstructions without and with statistical weights were compared, both using the same quadratic penalty term. The beam energy was set to 30 keV to amplify spatial differences in attenuation and thereby the role of statistical weights. A fan-beam data acquisition geometry was used. Results: Visual inspection of images clearly showed a difference in noise between the two reconstruction methods. As expected, the reconstruction without statistical weights exhibited noise streaks. The other reconstruction appeared better in this aspect, but presented other disturbing noise patterns and artifacts induced by the weights. The LROC analysis yielded the following 95-percent confidence interval for the difference in reader-averaged AUC (reconstruction without weights minus reconstruction with weights): [0.0026, 0.0599]. The mean AUC value was 0.9094. Conclusion: We have investigated the impact of statistical weights without the use of an edge-preserving penalty in penalized weighted least-square reconstruction. A decrease rather than increase in image quality was observed when using statistical weights. Thus, the observers were better able to cope with the noise streaks than the noise patterns and artifacts induced by the statistical weights. It may be that different results would be obtained if the penalty term was used with a pixel-dependent weight. F Noo receives research support from Siemens Healthcare GmbH.
Shi, Ximin; Li, Nan; Ding, Haiyan; Dang, Yonghong; Hu, Guilan; Liu, Shuai; Cui, Jie; Zhang, Yue; Li, Fang; Zhang, Hui; Huo, Li
2018-01-01
Kinetic modeling of dynamic 11C-acetate PET imaging provides quantitative information for myocardium assessment. The quality and quantitation of PET images are known to be dependent on PET reconstruction methods. This study aims to investigate the impacts of reconstruction algorithms on the quantitative analysis of dynamic 11C-acetate cardiac PET imaging. Suspected alcoholic cardiomyopathy patients (N = 24) underwent 11C-acetate dynamic PET imaging after a low dose CT scan. PET images were reconstructed using four algorithms: filtered backprojection (FBP), ordered subsets expectation maximization (OSEM), OSEM with time-of-flight (TOF), and OSEM with both time-of-flight and point-spread-function (TPSF). Standardized uptake values (SUVs) at different time points were compared among images reconstructed using the four algorithms. Time-activity curves (TACs) in myocardium and blood pools of ventricles were generated from the dynamic image series. Kinetic parameters K1 and k2 were derived using a 1-tissue-compartment model for kinetic modeling of cardiac flow from 11C-acetate PET images. Significant image quality improvement was found in the images reconstructed using iterative OSEM-type algorithms (OSEM, TOF, and TPSF) compared with FBP. However, no statistical differences in SUVs were observed among the four reconstruction methods at the selected time points. Kinetic parameters K1 and k2 also exhibited no statistical difference among the four reconstruction algorithms in terms of mean value and standard deviation. However, for the correlation analysis, OSEM reconstruction presented relatively higher residual in correlation with FBP reconstruction compared with TOF and TPSF reconstruction, and TOF and TPSF reconstruction were highly correlated with each other. All the tested reconstruction algorithms performed similarly for quantitative analysis of 11C-acetate cardiac PET imaging. TOF and TPSF yielded highly consistent kinetic parameter results with superior image quality compared with FBP. OSEM was relatively less reliable. Both TOF and TPSF were recommended for cardiac 11C-acetate kinetic analysis.
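To show what the 1-tissue-compartment fit involves, the sketch below simulates a time-activity curve as Ct(t) = K1 · Cp(t) ⊗ exp(−k2·t) and recovers K1 and k2 with a least-squares fit; the input function, frame timing, noise level, and starting values are synthetic assumptions, not data from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Frame mid-times (min) and a synthetic arterial input function Cp(t);
# both stand in for the measured blood-pool and myocardial TACs.
t = np.linspace(0.1, 20.0, 40)
dt = t[1] - t[0]
cp = 8.0 * t * np.exp(-1.5 * t)                 # toy input function

def one_tissue(t, K1, k2):
    """C_t(t) = K1 * int_0^t Cp(u) exp(-k2 (t - u)) du, approximated by a
    rectangular-rule convolution on the frame grid."""
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(cp, kernel)[:len(t)] * dt

# Synthetic myocardial curve with noise, then least-squares fit of K1, k2.
true_K1, true_k2 = 0.6, 0.25
ct = one_tissue(t, true_K1, true_k2)
ct_noisy = ct + np.random.default_rng(2).normal(scale=0.02, size=ct.shape)
(K1_fit, k2_fit), _ = curve_fit(one_tissue, t, ct_noisy, p0=(0.5, 0.1),
                                bounds=(0, [5.0, 5.0]))
```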
Statistical modeling of natural backgrounds in hyperspectral LWIR data
NASA Astrophysics Data System (ADS)
Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph
2016-09-01
Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.
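A toy version of the forward simulation driven by such a surface model is sketched below: temperatures drawn from a Gaussian and emissivities from a simple two-class mixture are pushed through the Planck function to produce surface-leaving LWIR radiance spectra whose mean and covariance can then be studied. The distribution parameters are illustrative assumptions, and the atmospheric propagation (e.g., via MODTRAN) needed for at-sensor radiance is omitted.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck const, light speed, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam (m)."""
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

# LWIR band, 8-13 micrometres.
lam = np.linspace(8e-6, 13e-6, 60)

# Statistical surface model (illustrative numbers): temperature drawn from a
# Gaussian, emissivity drawn from a two-class mixture of nearly flat spectra.
rng = np.random.default_rng(3)
n = 500
T = rng.normal(300.0, 3.0, size=n)                       # kelvin
cls = rng.random(n) < 0.7
emis = np.where(cls[:, None], 0.96, 0.90) + rng.normal(0, 0.005, (n, lam.size))

# Surface-leaving radiance for each sample; downwelling and path terms omitted.
L_surf = emis * planck(lam[None, :], T[:, None])
mean_spectrum, cov_spectrum = L_surf.mean(0), np.cov(L_surf, rowvar=False)
```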
Automatic stage identification of Drosophila egg chamber based on DAPI images
Jia, Dongyu; Xu, Qiuping; Xie, Qian; Mio, Washington; Deng, Wu-Min
2016-01-01
The Drosophila egg chamber, whose development is divided into 14 stages, is a well-established model for developmental biology. However, visual stage determination can be a tedious, subjective and time-consuming task prone to errors. Our study presents an objective, reliable and repeatable automated method for quantifying cell features and classifying egg chamber stages based on DAPI images. The proposed approach is composed of two steps: 1) a feature extraction step and 2) a statistical modeling step. The egg chamber features used are egg chamber size, oocyte size, egg chamber ratio and distribution of follicle cells. Methods for determining the onset of the polytene stage and centripetal migration are also discussed. The statistical model uses linear and ordinal regression to explore the stage-feature relationships and classify egg chamber stages. Combined with machine learning, our method has great potential to enable discovery of hidden developmental mechanisms. PMID:26732176
Liu, Wensong; Yang, Jie; Zhao, Jinqi; Shi, Hongtao; Yang, Le
2018-02-12
The traditional unsupervised change detection methods based on the pixel level can only detect the changes between two different times with the same sensor, and the results are easily affected by speckle noise. In this paper, a novel method is proposed to detect change based on time-series data from different sensors. Firstly, the overall difference image of the time-series PolSAR is calculated by omnibus test statistics, and difference images between any two images at different times are acquired by Rj test statistics. Secondly, the difference images are segmented with a Generalized Statistical Region Merging (GSRM) algorithm which can suppress the effect of speckle noise. A Generalized Gaussian Mixture Model (GGMM) is then used to obtain the time-series change detection maps in the final step of the proposed method. To verify the effectiveness of the proposed method, we carried out the experiment of change detection using time-series PolSAR images acquired by Radarsat-2 and Gaofen-3 over the city of Wuhan, in China. Results show that the proposed method can not only detect the time-series change from different sensors, but it can also better suppress the influence of speckle noise and improve the overall accuracy and Kappa coefficient.
NASA Astrophysics Data System (ADS)
Wanchuliak, O. Ya.; Peresunko, A. P.; Bakko, Bouzan Adel; Kushnerick, L. Ya.
2011-09-01
This paper presents the foundations of a large-scale, localized wavelet-polarization analysis of inhomogeneous laser images of histological sections of myocardial tissue. Relations between the structure of the wavelet coefficients and the cause of death were identified. The optical model of polycrystalline networks of myocardium protein fibrils is presented. The technique of determining the coordinate distribution of polarization azimuth of the points of laser images of myocardium histological sections is suggested. The results of investigating the interrelation between the values of statistical parameters (statistical moments of the 1st-4th order), which characterize the distributions of wavelet coefficients of polarization maps of myocardium layers, and the causes of death are presented.
Medical Image Retrieval: A Multimodal Approach
Cao, Yu; Steffey, Shawn; He, Jianbiao; Xiao, Degui; Tao, Cui; Chen, Ping; Müller, Henning
2014-01-01
Medical imaging is becoming a vital component of war on cancer. Tremendous amounts of medical image data are captured and recorded in a digital format during cancer care and cancer research. Facing such an unprecedented volume of image data with heterogeneous image modalities, it is necessary to develop effective and efficient content-based medical image retrieval systems for cancer clinical practice and research. While substantial progress has been made in different areas of content-based image retrieval (CBIR) research, direct applications of existing CBIR techniques to the medical images produced unsatisfactory results, because of the unique characteristics of medical images. In this paper, we develop a new multimodal medical image retrieval approach based on the recent advances in the statistical graphic model and deep learning. Specifically, we first investigate a new extended probabilistic Latent Semantic Analysis model to integrate the visual and textual information from medical images to bridge the semantic gap. We then develop a new deep Boltzmann machine-based multimodal learning model to learn the joint density model from multimodal information in order to derive the missing modality. Experimental results with large volume of real-world medical images have shown that our new approach is a promising solution for the next-generation medical imaging indexing and retrieval system. PMID:26309389
3D shape recovery from image focus using gray level co-occurrence matrix
NASA Astrophysics Data System (ADS)
Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid
2018-04-01
Recovering a precise and accurate 3-D shape of the target object utilizing a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture, converting the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied which recovers the depth map. In this document, we are concerned with proposing the Gray Level Co-occurrence Matrix along with its statistical features for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features and then applies the joint probability distribution function of the gray level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. This algorithm is deeply investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
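A simplified version of a GLCM-based focus measure is sketched below: each local window is quantized, a co-occurrence matrix for the horizontal neighbour offset is accumulated, and the Haralick contrast serves as the focus value from which a depth index is picked per pixel. The quantization depth, window size, single offset, and argmax depth selection (in place of the Gaussian-mixture step described above) are simplifying assumptions, and the nested loops are written for clarity rather than speed.

```python
import numpy as np

def glcm_focus(patch, levels=16):
    """Focus value of a grey-scale patch from its grey-level co-occurrence
    matrix (horizontal neighbour offset): the GLCM 'contrast' feature, which
    grows as the patch becomes sharper and more textured."""
    q = np.clip((patch.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # co-occurring pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    P /= P.sum()
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))          # Haralick contrast

def depth_from_focus(stack, win=9):
    """For each pixel, pick the frame index whose surrounding window has the
    largest GLCM contrast; `stack` is (n_frames, rows, cols)."""
    n, rows, cols = stack.shape
    half = win // 2
    depth = np.zeros((rows, cols), dtype=int)
    focus = np.full((rows, cols), -np.inf)
    for k in range(n):
        img = stack[k]
        for r in range(half, rows - half):
            for c in range(half, cols - half):
                f = glcm_focus(img[r-half:r+half+1, c-half:c+half+1])
                if f > focus[r, c]:
                    focus[r, c], depth[r, c] = f, k
    return depth
```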
NASA Astrophysics Data System (ADS)
Nikiforov, M. P.; Reukov, V. V.; Thompson, G. L.; Vertegel, A. A.; Guo, S.; Kalinin, S. V.; Jesse, S.
2009-10-01
Functional recognition imaging in scanning probe microscopy (SPM) using artificial neural network identification is demonstrated. This approach utilizes statistical analysis of complex SPM responses at a single spatial location to identify the target behavior, which is reminiscent of associative thinking in the human brain, obviating the need for analytical models. We demonstrate, as an example of recognition imaging, rapid identification of cellular organisms using the difference in electromechanical activity over a broad frequency range. Single-pixel identification of model Micrococcus lysodeikticus and Pseudomonas fluorescens bacteria is achieved, demonstrating the viability of the method.
A Test Strategy for High Resolution Image Scanners.
1983-10-01
[Fragmented OCR of the report; equations (7) and (8) are not recoverable.] Cited references include a multivariate analysis text (Holt, Rinehart and Winston, Inc., New York) and Graybill, F.A., 1961: An Introduction to Linear Statistical Models, Volume I. The report sets up a linear estimation model for the polynomial coefficients. Subject terms: high resolution image scanner, MTF, geometrical and radiometric performance, dynamic range, linearity, noise, dynamic scanning errors, response uniformity, skewness.
Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation
NASA Astrophysics Data System (ADS)
Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting
2014-12-01
This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of the Potts model and the mean vectors and covariance matrices of the Gaussian distributions used in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The estimation algorithm is derived based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and clarify how first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
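As a rough illustration of the energy such a model assigns, the sketch below combines a Potts prior with per-class Gaussian color likelihoods; it is not the paper's algorithm, since the hyperparameters (coupling beta, class means and covariances) are assumed fixed rather than estimated by conditional maximum entropy, and a simple synchronous ICM update stands in for loopy belief propagation.

```python
# Sketch: Potts-prior / Gaussian-likelihood labeling with an ICM-style update
# (a stand-in for loopy belief propagation; hyperparameters fixed by assumption).
import numpy as np

def icm_potts(img, means, covs, beta=1.0, n_iter=5):
    """img: (H, W, 3) color image; means/covs: per-class Gaussian parameters."""
    H, W, _ = img.shape
    K = len(means)
    nll = np.zeros((H, W, K))                       # negative log-likelihood per class
    for k in range(K):
        d = img - means[k]
        inv = np.linalg.inv(covs[k])
        nll[..., k] = 0.5 * np.einsum('hwi,ij,hwj->hw', d, inv, d) \
                      + 0.5 * np.log(np.linalg.det(covs[k]))
    labels = nll.argmin(axis=-1)                    # initialize at the maximum-likelihood label
    for _ in range(n_iter):
        best = None
        for k in range(K):
            disagree = np.zeros((H, W))             # count 4-neighbours disagreeing with label k
            for ax, sh in [(0, 1), (0, -1), (1, 1), (1, -1)]:
                disagree += np.roll(labels, sh, axis=ax) != k   # periodic boundary, for brevity
            energy = nll[..., k] + beta * disagree  # Potts penalty on disagreeing neighbours
            if best is None:
                best, best_k = energy, np.full((H, W), k)
            else:
                mask = energy < best
                best = np.where(mask, energy, best)
                best_k = np.where(mask, k, best_k)
        labels = best_k
    return labels
```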
Vardhanabhuti, Varut; James, Julia; Nensey, Rehaan; Hyde, Christopher; Roobottom, Carl
2015-05-01
To compare image quality on computed tomographic colonography (CTC) acquired at standard dose (STD) and low dose (LD) using filtered-back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction (MBIR) techniques. A total of 65 symptomatic patients were prospectively enrolled for the study and underwent STD and LD CTC with filtered-back projection, adaptive statistical iterative reconstruction, and MBIR to allow direct per-patient comparison. Objective image noise, subjective image analyses, and polyp detection were assessed. Objective image noise analysis demonstrates significant noise reduction using MBIR technique (P < .05) despite being acquired at lower doses. Subjective image analyses were superior for LD MBIR in all parameters except visibility of extracolonic lesions (two-dimensional) and visibility of colonic wall (three-dimensional) where there were no significant differences. There was no significant difference in polyp detection rates (P > .05). Doses: LD (dose-length product, 257.7), STD (dose-length product, 483.6). LD MBIR CTC objectively shows improved image noise using parameters in our study. Subjectively, image quality is maintained. Polyp detection shows no significant difference but because of small numbers needs further validation. Average dose reduction of 47% can be achieved. This study confirms feasibility of using MBIR in this context of CTC in symptomatic population. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images
NASA Astrophysics Data System (ADS)
Ely, G.; Malcolm, A. E.; Poliannikov, O. V.
2017-12-01
Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allows us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimations. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create both qualitative descriptions of seismic image uncertainty and put error bounds on quantities of interest such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
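A minimal sketch of the sampling idea is shown below: a plain random-walk Metropolis-Hastings loop over a low-dimensional velocity parameterization, with a user-supplied placeholder forward(model) standing in for the field expansion solver. The Gaussian noise level and proposal step size are illustrative assumptions, and the paper's adaptive proposal tuning is omitted.

```python
# Sketch: random-walk Metropolis-Hastings over velocity-model parameters
# (placeholder forward solver; adaptive proposals omitted).
import numpy as np

def metropolis_velocity(data, forward, m0, sigma=0.05, step=0.01, n=20000, seed=0):
    rng = np.random.default_rng(seed)

    def log_like(m):
        r = data - forward(m)                           # residual under a Gaussian noise model
        return -0.5 * np.sum(r**2) / sigma**2

    m = np.asarray(m0, dtype=float)
    ll = log_like(m)
    samples = []
    for _ in range(n):
        prop = m + step * rng.standard_normal(m.shape)  # random-walk proposal
        ll_prop = log_like(prop)
        if np.log(rng.random()) < ll_prop - ll:         # Metropolis accept/reject
            m, ll = prop, ll_prop
        samples.append(m.copy())
    # posterior samples; migrating each one maps velocity uncertainty into image uncertainty
    return np.array(samples)
```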
Dose fractionation theorem in 3-D reconstruction (tomography)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glaeser, R.M.
It is commonly assumed that the large number of projections required for single-axis tomography precludes its application to most beam-labile specimens. However, Hegerl and Hoppe have pointed out that the total dose required to achieve statistical significance for each voxel of a computed 3-D reconstruction is the same as that required to obtain a single 2-D image of that isolated voxel, at the same level of statistical significance. Thus a statistically significant 3-D image can be computed from statistically insignificant projections, as long as the total dosage distributed among these projections is high enough that it would have resulted in a statistically significant projection if applied to only one image. We have tested this critical theorem by simulating the tomographic reconstruction of a realistic 3-D model created from an electron micrograph. The simulations verify the basic conclusions of the theorem under conditions of high absorption, signal-dependent noise, varying specimen contrast and missing angular range. Furthermore, the simulations demonstrate that individual projections in the series of fractionated-dose images can be aligned by cross-correlation because they contain significant information derived from the summation of features from different depths in the structure. This latter information is generally not useful for structural interpretation prior to 3-D reconstruction, owing to the complexity of most specimens investigated by single-axis tomography. These results, in combination with dose estimates for imaging single voxels and measurements of radiation damage in the electron microscope, demonstrate that it is feasible to use single-axis tomography with soft X-ray microscopy of frozen-hydrated specimens.
Montesantos, Spyridon; Katz, Ira; Pichelin, Marine; Caillibotte, Georges
2016-01-01
A quantitative description of the morphology of lung structure is essential prior to any form of predictive modeling of ventilation or aerosol deposition within the lung. The human lung is a very complex organ, with airway structures that span two orders of magnitude and have a multitude of interfaces between air, tissue and blood. As such, current medical imaging protocols cannot provide medical practitioners and researchers with in-vivo knowledge of the deeper lung structures. In this work a detailed algorithm for the generation of an individualized 3D deterministic model of the conducting part of the human tracheo-bronchial tree is described. Distinct initial conditions were obtained from the high-resolution computed tomography (HRCT) images of seven healthy volunteers. The algorithm developed is fractal in nature and is implemented as a self-similar space sub-division procedure. The expansion process utilizes physiologically realistic relationships and thresholds to produce an anatomically consistent human airway tree. The model was validated through extensive statistical analysis of the results and comparison of the most common morphological features with previously published morphometric studies and other equivalent models. The resulting trees were shown to be in good agreement with published human lung geometric characteristics and can be used to study, among other things, structure-function relationships in simulation studies.
Cloherty, Shaun L; Hietanen, Markus A; Suaning, Gregg J; Ibbotson, Michael R
2010-01-01
We performed optical intrinsic signal imaging of cat primary visual cortex (Areas 17 and 18) while delivering bipolar electrical stimulation to the retina by way of a supra-choroidal electrode array. Using a general linear model (GLM) analysis we identified statistically significant (p < 0.01) activation in a localized region of cortex following supra-threshold electrical stimulation at a single retinal locus. These results (1) demonstrate that intrinsic signal imaging combined with linear model analysis provides a powerful tool for assessing cortical responses to prosthetic stimulation, and (2) confirm that supra-choroidal electrical stimulation can achieve localized activation of the cortex consistent with focal activation of the retina.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Modeling urban expansion in Yangon, Myanmar using Landsat time-series and stereo GeoEye Images
NASA Astrophysics Data System (ADS)
Sritarapipat, Tanakorn; Takeuchi, Wataru
2016-06-01
This research proposed a methodology to model urban expansion with a dynamic statistical model using Landsat and GeoEye images. A Landsat time series from 1978 to 2010 was used to extract land cover from the past to the present. Stereo GeoEye images were employed to obtain building heights. The class translation was obtained by observing land cover from the past to the present, and building height can be used to detect the centers of the urban area (mainly commercial areas). It was assumed that urban growth is affected by the class translation, the distance from the multiple centers of the urban area, and the distance from the roads. The urban expansion model based on the dynamic statistical model was therefore defined in terms of three factors: (1) the class translation, (2) the distance from the multiple centers of the urban areas, and (3) the distance from the roads. Estimation and prediction of urban expansion using the model are formulated in this research. The experimental area was Yangon, Myanmar, the country's major economic center, with a population of more than five million and rapidly growing urban areas. The experimental results indicate that the model estimates urban growth efficiently in both the estimation and prediction steps.
Parameterization of sparse vegetation in thermal images of natural ground landscapes
NASA Astrophysics Data System (ADS)
Agassi, Eyal; Ben-Yosef, Nissim
1997-10-01
The radiant statistics of thermal images of desert terrain scenes and their temporal behavior have been fully understood and well modeled. Unlike desert scenes, most natural terrestrial landscapes contain vegetative objects. A plant is a living object that regulates its temperature through evapotranspiration via the leaf stomata, and plant interaction with the outside world is influenced by its physiological processes. Therefore, the heat balance equation for a vegetative object differs from that for an inorganic surface element. Despite this difficulty, plants can be incorporated into the desert surface model when an effective heat conduction parameter is associated with vegetation. Due to evapotranspiration, the effective heat conduction of plants during daytime is much higher than at night. As a result, plants (mainly trees and bushes) are usually the coldest objects in the scene during the daytime, while they are not necessarily the warmest objects at night. The parameterization of vegetative objects in terms of effective heat conduction enables the extension of the desert terrain model to scenes with sparse vegetation and the estimation of their radiant statistics and diurnal behavior. The effective heat conduction image can serve as a tool for vegetation type classification and for assessing the dominant physical processes that determine thermal image properties.
The lawful imprecision of human surface tilt estimation in natural scenes.
Kim, Seha; Burge, Johannes
2018-01-31
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.
A quantitative study of nanoparticle skin penetration with interactive segmentation.
Lee, Onseok; Lee, See Hyun; Jeong, Sang Hoon; Kim, Jaeyoung; Ryu, Hwa Jung; Oh, Chilhwan; Son, Sang Wook
2016-10-01
In the last decade, the application of nanotechnology techniques has expanded within diverse areas such as pharmacology, medicine, and optical science. Despite such wide-ranging possibilities for implementation into practice, the mechanisms behind nanoparticle skin absorption remain unknown. Moreover, the main mode of investigation has been qualitative analysis. Using interactive segmentation, this study suggests a method of objectively and quantitatively analyzing the mechanisms underlying the skin absorption of nanoparticles. Silica nanoparticles (SNPs) were assessed using transmission electron microscopy and applied to a human skin equivalent model. Captured fluorescence images of this model were used to evaluate the degree of skin penetration. These images underwent interactive segmentation and image processing, followed by statistical quantitative analyses of calculated image parameters including the mean, integrated density, skewness, kurtosis, and area fraction. In images from both groups, the distribution area and intensity of fluorescent silica gradually increased with time. Statistical significance was achieved after 2 days in the negative-charge group and after 4 days in the positive-charge group, indicating a difference in time course between the groups. Furthermore, the quantity of silica per unit area showed a dramatic change after 6 days in the negative-charge group. Although this quantitative result agrees with the result obtained by qualitative assessment, it is meaningful in that it was established by statistical analysis of quantities derived through image processing. The present study suggests that the surface charge of SNPs could play an important role in the percutaneous absorption of NPs. These findings can help achieve a better understanding of the percutaneous transport of NPs. In addition, these results provide important guidance for the design of NPs for biomedical applications.
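For illustration, the image parameters named above can be computed from a single fluorescence channel roughly as follows; this is a sketch under assumed conventions (in particular the fixed intensity threshold used for the area fraction), not the authors' measurement code.

```python
# Sketch: the five image parameters from the abstract for one fluorescence image.
import numpy as np
from scipy.stats import skew, kurtosis

def fluorescence_stats(img, thresh=10):
    """img: 2-D array of fluorescence intensities."""
    v = img.astype(np.float64).ravel()
    return {
        "mean": v.mean(),
        "integrated_density": v.sum(),            # sum of all pixel intensities
        "skewness": skew(v),
        "kurtosis": kurtosis(v),                  # excess (Fisher) kurtosis
        "area_fraction": np.mean(v > thresh),     # fraction of pixels above the assumed threshold
    }
```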
Assessment of corneal properties based on statistical modeling of OCT speckle.
Jesus, Danilo A; Iskander, D Robert
2017-01-01
A new approach to assessing the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of the essence.
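A minimal sketch of the model-ranking step, assuming the speckle samples have already been extracted from the OCT image as a one-dimensional array; the candidate distributions other than the generalized Gamma are illustrative choices.

```python
# Sketch: fit candidate distributions to OCT speckle and rank them by AIC.
import numpy as np
from scipy import stats

def rank_models(speckle, candidates=("gengamma", "gamma", "rayleigh", "lognorm")):
    speckle = np.asarray(speckle, dtype=float)
    results = []
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(speckle)                      # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(speckle, *params))
        aic = 2 * len(params) - 2 * loglik              # lower AIC = preferred model
        results.append((aic, name, params))
    return sorted(results)                              # best model first
```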
Predicting clinical diagnosis in Huntington's disease: An imaging polymarker
Daws, Richard E.; Soreq, Eyal; Johnson, Eileanoir B.; Scahill, Rachael I.; Tabrizi, Sarah J.; Barker, Roger A.; Hampshire, Adam
2018-01-01
Objective Huntington's disease (HD) gene carriers can be identified before clinical diagnosis; however, statistical models for predicting when overt motor symptoms will manifest are too imprecise to be useful at the level of the individual. Perfecting this prediction is integral to the search for disease modifying therapies. This study aimed to identify an imaging marker capable of reliably predicting real‐life clinical diagnosis in HD. Method A multivariate machine learning approach was applied to resting‐state and structural magnetic resonance imaging scans from 19 premanifest HD gene carriers (preHD, 8 of whom developed clinical disease in the 5 years postscanning) and 21 healthy controls. A classification model was developed using cross‐group comparisons between preHD and controls, and within the preHD group in relation to “estimated” and “actual” proximity to disease onset. Imaging measures were modeled individually, and combined, and permutation modeling robustly tested classification accuracy. Results Classification performance for preHDs versus controls was greatest when all measures were combined. The resulting polymarker predicted converters with high accuracy, including those who were not expected to manifest in that time scale based on the currently adopted statistical models. Interpretation We propose that a holistic multivariate machine learning treatment of brain abnormalities in the premanifest phase can be used to accurately identify those patients within 5 years of developing motor features of HD, with implications for prognostication and preclinical trials. Ann Neurol 2018;83:532–543 PMID:29405351
Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter
2016-01-01
Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences were crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
Real-world visual statistics and infants' first-learned object names
Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen
2017-01-01
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373
Ren, Jiayin; Zhou, Zhongwei; Li, Peng; Tang, Wei; Guo, Jixiang; Wang, Hu; Tian, Weidong
2016-09-01
This study aimed to evaluate an innovative workflow for maxillofacial fracture surgery planning and surgical splint design. Maxillofacial multislice computerized tomography (MSCT) data and dental cone beam computerized tomography (CBCT) data were both obtained from 40 normal adults and 58 adults who had suffered fractures. Each part of the CBCT dentition image was registered to the MSCT image using the iterative closest point algorithm. Volume evaluation of the virtual splints designed from the registered MSCT images and from the MSCT images of the same object was performed. Eighteen patients (group 1) were operated on without any splint. Twenty-one patients (group 2) and 19 patients (group 3) used splints designed from the MSCT images and from the registered MSCT images, respectively. The results showed that the mean errors between the 2 models ranged from 0.53 to 0.92 mm and the RMS errors ranged from 0.38 to 0.69 mm in fracture patients. The mean errors between the 2 models ranged from 0.47 to 0.85 mm and the RMS errors ranged from 0.33 to 0.71 mm in normal adults. Occlusion was recovered in 72.22% of patients in group 1, 85.71% of patients in group 2, and 94.73% of patients in group 3. There was a statistically significant difference between the volume of the splints designed from the MSCT images and that of the splints designed from the registered MSCT images in patients (P < 0.05), in normal adults (P < 0.05), and in patients and normal adults combined (P < 0.05). The occlusion recovery rate of group 3 was better than that of groups 1 and 2. Integrating CBCT images into MSCT images for splint design was feasible. The volume of the splints designed from MSCT images tended to be smaller than that of the splints designed from the integrated MSCT images. Patients operated on with splints tended to regain occlusion, and patients operated on with splints designed from the registered MSCT images tended to achieve occlusal recovery.
NASA Astrophysics Data System (ADS)
Yuksel, Kivanc; Chang, Xin; Skarbek, Władysław
2017-08-01
A novel smile recognition algorithm is presented, based on the extraction of 68 facial salient points (fp68) using an ensemble of regression trees. The smile detector exploits a linear Support Vector Machine model, trained with a few hundred exemplar images by the SVM algorithm working in a 136-dimensional space. Strict statistical data analysis shows that such a geometric detector depends strongly on the geometry of the mouth-opening area, measured by triangulation of the outer lip contour. To this end, two Bayesian detectors were developed and compared with the SVM detector. The first uses the mouth area in the 2D image, while the second refers to the mouth area in a 3D animated face model. The 3D modeling is based on the Candide-3 model and is performed in real time along with the three smile detectors and statistics estimators. The mouth-area/Bayesian detectors exhibit high correlation with the fp68/SVM detector, in the range [0.8, 1.0], depending mainly on lighting conditions and individual features, with an advantage for the 3D technique, especially under hard lighting conditions.
2016-06-01
[Truncated report record; only fragments are recoverable.] The work concerned theories of the mammalian visual system and exploiting descriptive text that may accompany a still image for improved inference; the focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.
1991-12-01
[Truncated report record; only a contents fragment and part of one sentence are recoverable.] Contents: 2.6.1 Multi-Shape Detection; 2.6.2 Line Segment Extraction and Re-Combination; 2.6.3 Planimetric Feature Extraction; 2.6.4 Line Segment Extraction From Statistical Texture Analysis; 2.6.5 Edge Following as Graph ... A surviving sentence notes that processing image after image could benefit from the fact that major spatial characteristics of subregions could be extracted, and minor spatial changes could be ...
Knowledge Extraction from Atomically Resolved Images.
Vlcek, Lukas; Maksov, Artem; Pan, Minghu; Vasudevan, Rama K; Kalinin, Sergei V
2017-10-24
Tremendous strides in experimental capabilities of scanning transmission electron microscopy and scanning tunneling microscopy (STM) over the past 30 years made atomically resolved imaging routine. However, consistent integration and use of atomically resolved data with generative models is unavailable, so information on local thermodynamics and other microscopic driving forces encoded in the observed atomic configurations remains hidden. Here, we present a framework based on statistical distance minimization to consistently utilize the information available from atomic configurations obtained from an atomically resolved image and extract meaningful physical interaction parameters. We illustrate the applicability of the framework on an STM image of a FeSe_xTe_{1-x} superconductor, with the segregation of the chalcogen atoms investigated using a nonideal interacting solid solution model. This universal method makes full use of the microscopic degrees of freedom sampled in an atomically resolved image and can be extended via Bayesian inference toward unbiased model selection with uncertainty quantification.
Hyperspectral imaging simulation of object under sea-sky background
NASA Astrophysics Data System (ADS)
Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui
2016-10-01
Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring, search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based simulation method for spectral images of objects under a sea scene is proposed. By developing an imaging simulation model that considers the object, background, atmospheric conditions, and sensor, it is possible to examine the influence of wind speed, atmospheric conditions, and other environmental factors on spectral image quality under complex sea scenes. Firstly, the sea scattering model is established based on the Phillips sea spectral model, rough surface scattering theory, and the volume scattering characteristics of water. The measured bidirectional reflectance distribution function (BRDF) data of objects are fit to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor, and the atmospheric backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the composite scattering of the object on the sea surface and the spectral image. Finally, the object spectrum is acquired through spatial transformation, radiometric degradation, and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters, and the sensor parameters, which provides a tool for payload demonstration and algorithm development.
NASA Astrophysics Data System (ADS)
Folkert, Michael R.; Setton, Jeremy; Apte, Aditya P.; Grkovski, Milan; Young, Robert J.; Schöder, Heiko; Thorstad, Wade L.; Lee, Nancy Y.; Deasy, Joseph O.; Oh, Jung Hun
2017-07-01
In this study, we investigate the use of imaging feature-based outcomes research (‘radiomics’) combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy-four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model building methods were tested by cross-validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross-validation with the area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029), whereas the models for ACM and DM did not reach statistical significance but yielded predictive power comparable to the 5-fold cross-validation, with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity correlated significantly with mortality, LF, and DM on 5-fold cross-validation in a relatively uniform single-institution cohort. The LF model also retained significance in an independent population.
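A minimal sketch of the model-building and validation step, using an L1-penalized logistic regression scored by 5-fold cross-validated AUC; the penalty, feature scaling, and use of scikit-learn are assumptions for illustration, not the authors' exact feature-selection pipeline.

```python
# Sketch: cross-validated AUC for a logistic model over clinical + radiomic features.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def crossval_auc(X, y):
    """X: (n_patients, n_features) clinical + PET radiomic matrix; y: binary outcome."""
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    )
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    return scores.mean(), scores.std()
```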
Multi-Reader ROC studies with Split-Plot Designs: A Comparison of Statistical Methods
Obuchowski, Nancy A.; Gallas, Brandon D.; Hillis, Stephen L.
2012-01-01
Rationale and Objectives Multi-reader imaging trials often use a factorial design, where study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of the design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper we compare three methods of analysis for the split-plot design. Materials and Methods Three statistical methods are presented: Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean ANOVA approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power and confidence interval coverage of the three test statistics. Results The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% CIs fall close to the nominal coverage for small and large sample sizes. Conclusions The split-plot MRMC study design can be statistically efficient compared with the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rate, similar power, and nominal CI coverage, are available for this study design. PMID:23122570
Multi-reader ROC studies with split-plot designs: a comparison of statistical methods.
Obuchowski, Nancy A; Gallas, Brandon D; Hillis, Stephen L
2012-12-01
Multireader imaging trials often use a factorial design, in which study patients undergo testing with all imaging modalities and readers interpret the results of all tests for all patients. A drawback of this design is the large number of interpretations required of each reader. Split-plot designs have been proposed as an alternative, in which one or a subset of readers interprets all images of a sample of patients, while other readers interpret the images of other samples of patients. In this paper, the authors compare three methods of analysis for the split-plot design. Three statistical methods are presented: the Obuchowski-Rockette method modified for the split-plot design, a newly proposed marginal-mean analysis-of-variance approach, and an extension of the three-sample U-statistic method. A simulation study using the Roe-Metz model was performed to compare the type I error rate, power, and confidence interval coverage of the three test statistics. The type I error rates for all three methods are close to the nominal level but tend to be slightly conservative. The statistical power is nearly identical for the three methods. The coverage of 95% confidence intervals falls close to the nominal coverage for small and large sample sizes. The split-plot multireader, multicase study design can be statistically efficient compared to the factorial design, reducing the number of interpretations required per reader. Three methods of analysis, shown to have nominal type I error rates, similar power, and nominal confidence interval coverage, are available for this study design. Copyright © 2012 AUR. All rights reserved.
Pauli structures arising from confined particles interacting via a statistical potential
NASA Astrophysics Data System (ADS)
Batle, Josep; Ciftja, Orion; Farouk, Ahmed; Alkhambashi, Majid; Abdalla, Soliman
2017-09-01
There have been suggestions that the Pauli exclusion principle alone can lead a non-interacting (free) system of identical fermions to form crystalline structures dubbed Pauli crystals. Single-shot imaging experiments for the case of ultra-cold systems of free spin-polarized fermionic atoms in a two-dimensional harmonic trap appear to show geometric arrangements that cannot be characterized as Wigner crystals. This work explores this idea and considers a well-known approach that enables one to treat a quantum system of free fermions as a system of classical particles interacting with a statistical interaction potential. The model under consideration, though classical in nature, incorporates the quantum statistics by endowing the classical particles with an effective interaction potential. The reasonable expectation is that possible Pauli crystal features seen in experiments may manifest in this model that captures the correct quantum statistics as a first order correction. We use the Monte Carlo simulated annealing method to obtain the most stable configurations of finite two-dimensional systems of confined particles that interact with an appropriate statistical repulsion potential. We consider both an isotropic harmonic and a hard-wall confinement potential. Despite minor differences, the most stable configurations observed in our model correspond to the reported Pauli crystals in single-shot imaging experiments of free spin-polarized fermions in a harmonic trap. The crystalline configurations observed appear to be different from the expected classical Wigner crystal structures that would emerge should the confined classical particles had interacted with a pair-wise Coulomb repulsion.
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
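For orientation, a plain Richardson-Lucy loop is sketched below as a stand-in for the deconSTORM estimator: it shares the Poisson image-formation model but omits the sparsity prior and the inter-frame temporal correlations that deconSTORM exploits, so it illustrates iterative image deconvolution in general rather than the authors' method.

```python
# Sketch: Richardson-Lucy deconvolution of one fluorescence frame (a generic stand-in).
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(frame, psf, n_iter=50, eps=1e-12):
    """frame: observed frame (non-negative); psf: point spread function normalized to sum 1."""
    est = np.full(frame.shape, frame.mean(), dtype=np.float64)   # flat initial estimate
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = frame / (blurred + eps)
        est *= fftconvolve(ratio, psf_flip, mode="same")          # multiplicative Poisson-ML update
    return est
```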
Signal and noise modeling in confocal laser scanning fluorescence microscopy.
Herberich, Gerlind; Windoffer, Reinhard; Leube, Rudolf E; Aach, Til
2012-01-01
Fluorescence confocal laser scanning microscopy (CLSM) has revolutionized imaging of subcellular structures in biomedical research by enabling the acquisition of 3D time-series of fluorescently-tagged proteins in living cells, hence forming the basis for an automated quantification of their morphological and dynamic characteristics. Due to the inherently weak fluorescence, CLSM images exhibit a low SNR. We present a novel model for the transfer of signal and noise in CLSM that is both theoretically sound as well as corroborated by a rigorous analysis of the pixel intensity statistics via measurement of the 3D noise power spectra, signal-dependence and distribution. Our model provides a better fit to the data than previously proposed models. Further, it forms the basis for (i) the simulation of the CLSM imaging process indispensable for the quantitative evaluation of CLSM image analysis algorithms, (ii) the application of Poisson denoising algorithms and (iii) the reconstruction of the fluorescence signal.
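A minimal sketch of the kind of forward simulation such a signal and noise model enables, combining signal-dependent Poisson photon noise with additive Gaussian read-out noise; the gain and read-noise values here are illustrative assumptions, not parameters fitted in the paper.

```python
# Sketch: synthetic CLSM frame with Poisson photon noise plus Gaussian read-out noise.
import numpy as np

def simulate_clsm_frame(fluorescence, gain=5.0, read_sigma=2.0, seed=0):
    """fluorescence: noise-free expected emission image (arbitrary units)."""
    rng = np.random.default_rng(seed)
    fluorescence = np.asarray(fluorescence, dtype=float)
    expected_photons = gain * np.clip(fluorescence, 0, None)
    photons = rng.poisson(expected_photons)                  # signal-dependent noise
    readout = rng.normal(0.0, read_sigma, fluorescence.shape)
    return photons + readout                                 # simulated detected intensity
```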
Image registration for a UV-Visible dual-band imaging system
NASA Astrophysics Data System (ADS)
Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua
2018-06-01
The detection of corona discharge is an effective way to perform early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithm details of the UV image preprocessing and affine transformation model establishment, together with experiments verifying their feasibility. The denoising algorithm is based on a correlation operation between raw UV images and a continuous mask, and the transformation model is established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. The average position displacement errors between the corona discharge and the equipment fault, at distances in the 2.5 m-20 m range, are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but is also intended to provide a more general reference for the calibration of various dual-band imaging systems in practice.
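A minimal sketch of the transformation-model step only, assuming matched corner points have already been detected in the UV and visible frames; the robust RANSAC estimation and its thresholds below are illustrative choices rather than the statistical method used in the paper.

```python
# Sketch: estimate an affine model from matched corners and warp the UV frame onto the visible frame.
import numpy as np
from skimage.measure import ransac
from skimage.transform import AffineTransform, warp

def register_uv_to_visible(uv_img, uv_pts, vis_pts):
    """uv_pts, vis_pts: (N, 2) arrays of matched (x, y) corner coordinates."""
    model, inliers = ransac((uv_pts, vis_pts), AffineTransform,
                            min_samples=3, residual_threshold=2.0, max_trials=1000)
    # warp() expects a map from output (visible) coordinates back to input (UV) coordinates
    registered = warp(uv_img, model.inverse, output_shape=uv_img.shape)
    return registered, model.params, inliers
```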
Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng
2016-08-18
Hyperspectral images provide great potential for target detection; however, new challenges are also introduced, so hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability with limited endmembers, and the latter type usually treats target detection as a simple classification problem and pays little attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which avoids an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is applied to the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse-representation-based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than reconstruction-residual-based detection methods. Extensive experimental results with three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms state-of-the-art target detectors.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Local image statistics: maximum-entropy constructions and perceptual salience
Victor, Jonathan D.; Conte, Mary M.
2012-01-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics—including luminance distributions, pair-wise correlations, and higher-order correlations—are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions. PMID:22751397
Weakly supervised image semantic segmentation based on clustering superpixels
NASA Astrophysics Data System (ADS)
Yan, Xiong; Liu, Xiaohua
2018-04-01
In this paper, we propose an image semantic segmentation model that is trained from image-level labeled images. The proposed model starts with superpixel segmentation, and features of the superpixels are extracted by a trained CNN. We introduce a superpixel-based graph and apply a graph partition method to group correlated superpixels into clusters. To acquire inter-label correlations between the image-level labels in the dataset, we utilize not only label co-occurrence statistics but also visual contextual cues. Finally, we formulate the task of mapping appropriate image-level labels to the detected clusters as a convex minimization problem. Experimental results on the MSRC-21 dataset and the LabelMe dataset show that the proposed method performs better than most weakly supervised methods and is even comparable to fully supervised methods.
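A minimal sketch of the first two stages (superpixel segmentation and per-superpixel features); a mean-color descriptor stands in for the trained CNN features, and the graph partitioning and label-assignment stages are not shown.

```python
# Sketch: SLIC superpixels with a simple mean-color descriptor per superpixel.
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image, n_segments=300):
    """image: (H, W, 3) RGB array with values in [0, 1]."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.array([image[labels == s].mean(axis=0)        # one descriptor per superpixel
                      for s in range(labels.max() + 1)])
    return labels, feats
```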
Bruse, Jan L; McLeod, Kristin; Biglino, Giovanni; Ntsinjana, Hopewell N; Capelli, Claudio; Hsia, Tain-Yen; Sermesant, Maxime; Pennec, Xavier; Taylor, Andrew M; Schievano, Silvia
2016-05-31
Medical image analysis in clinical practice is commonly carried out on 2D image data, without fully exploiting the detailed 3D anatomical information that is provided by modern non-invasive medical imaging techniques. In this paper, a statistical shape analysis method is presented, which enables the extraction of 3D anatomical shape features from cardiovascular magnetic resonance (CMR) image data, with no need for manual landmarking. The method was applied to repaired aortic coarctation arches that present complex shapes, with the aim of capturing shape features as biomarkers of potential functional relevance. The method is presented from the user-perspective and is evaluated by comparing results with traditional morphometric measurements. Steps required to set up the statistical shape modelling analyses, from pre-processing of the CMR images to parameter setting and strategies to account for size differences and outliers, are described in detail. The anatomical mean shape of 20 aortic arches post-aortic coarctation repair (CoA) was computed based on surface models reconstructed from CMR data. By analysing transformations that deform the mean shape towards each of the individual patient's anatomy, shape patterns related to differences in body surface area (BSA) and ejection fraction (EF) were extracted. The resulting shape vectors, describing shape features in 3D, were compared with traditionally measured 2D and 3D morphometric parameters. The computed 3D mean shape was close to population mean values of geometric shape descriptors and visually integrated characteristic shape features associated with our population of CoA shapes. After removing size effects due to differences in body surface area (BSA) between patients, distinct 3D shape features of the aortic arch correlated significantly with EF (r = 0.521, p = .022) and were well in agreement with trends as shown by traditional shape descriptors. The suggested method has the potential to discover previously unknown 3D shape biomarkers from medical imaging data. Thus, it could contribute to improving diagnosis and risk stratification in complex cardiac disease.
Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T
The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m2. Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P < 0.05). Adaptive statistical iterative reconstruction-V 90% showed superior LCD and had the highest CNR in the liver, aorta, and pancreas, measuring 7.32 ± 3.22, 11.60 ± 4.25, and 4.60 ± 2.31, respectively, compared with the next best series of ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P < 0.0001). Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
Associative memory model for searching an image database by image snippet
NASA Astrophysics Data System (ADS)
Khan, Javed I.; Yun, David Y.
1994-09-01
This paper presents an associative memory called multidimensional holographic associative computing (MHAC), which can potentially be used to perform feature-based image database queries using an image snippet. MHAC has the unique capability to selectively focus on specific segments of a query frame during associative retrieval. As a result, this model can perform search on the basis of featural significance described by a subset of the snippet pixels. This capability is critical for visual query in image databases because quite often the cognitive index features in the snippet are statistically weak. Unlike conventional artificial associative memories, MHAC uses a two-level representation and incorporates additional meta-knowledge about the reliability status of the segments of information it receives and forwards. In this paper we present an analysis of the focus characteristics of MHAC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10^2 random numbers per ray incident on a detector pixel instead of an estimated 10^8 random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
ERIC Educational Resources Information Center
Delaney, Michael F.
1984-01-01
This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
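The abstract's key idea, incrementally updating the image and noise statistics as each scan line arrives, can be sketched in a few lines of NumPy. The class below is a hypothetical illustration, not the authors' released code; in particular, estimating the noise from neighbouring-pixel differences is an assumed heuristic.

```python
import numpy as np

class IncrementalLineStats:
    """Running mean and covariance of hyperspectral pixels, updated one scan
    line at a time; noise scatter is estimated from differences between
    neighbouring pixels within each line (a common shift-difference heuristic,
    assumed here rather than taken from the paper)."""

    def __init__(self, n_bands):
        self.n = 0
        self.mean = np.zeros(n_bands)
        self.cov_sum = np.zeros((n_bands, n_bands))    # image scatter (Welford M2)
        self.noise_sum = np.zeros((n_bands, n_bands))  # noise scatter
        self.n_noise = 0

    def update(self, line):
        """line: array of shape (n_pixels, n_bands) for one scan line."""
        for x in line:
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.cov_sum += np.outer(delta, x - self.mean)  # Welford-style update
        diffs = np.diff(line, axis=0)                       # neighbour differences
        self.noise_sum += diffs.T @ diffs / 2.0
        self.n_noise += len(diffs)

    def covariances(self):
        return (self.cov_sum / max(self.n - 1, 1),
                self.noise_sum / max(self.n_noise, 1))
```

The MNF basis itself would then be obtained from the generalized eigendecomposition of the two covariance estimates whenever a denoised line is requested.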
A Kp-based model of auroral boundaries
NASA Astrophysics Data System (ADS)
Carbary, James F.
2005-10-01
The auroral oval can serve as both a representation and a prediction of space weather on a global scale, so a competent model of the oval as a function of a geomagnetic index could conveniently appraise space weather itself. A simple model of the auroral boundaries is constructed by binning several months of images from the Polar Ultraviolet Imager by Kp index. The pixel intensities are first averaged into magnetic latitude-magnetic local time (MLAT-MLT) bins, and intensity profiles are then derived for each Kp level at 1 hour intervals of MLT. After background correction, the boundary latitudes of each profile are determined at a threshold of 4 photons cm⁻² s⁻¹. The peak locations and peak intensities are also found. The boundary and peak locations vary linearly with Kp index, and the coefficients of the linear fits are tabulated for each MLT. As a general rule of thumb, the UV intensity peak shifts 1° in magnetic latitude for each increment in Kp. The fits are surprisingly good for Kp < 6 but begin to deteriorate at high Kp because of auroral boundary irregularities and poor statistics. The statistical model allows calculation of the auroral boundaries at most MLTs as a function of Kp and can serve as an approximation to the shape and extent of the statistical oval.
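A minimal sketch of the paper's core step, fitting boundary latitude linearly against Kp for one MLT bin, might look as follows; the numbers are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical table for one MLT hour: equatorward boundary latitude (deg MLAT)
# observed at each Kp level (illustrative values only).
kp = np.array([0, 1, 2, 3, 4, 5, 6])
boundary_mlat = np.array([68.5, 67.3, 66.0, 64.9, 63.6, 62.5, 61.2])

# Linear fit: boundary(Kp) = a * Kp + b, one coefficient pair per MLT bin.
a, b = np.polyfit(kp, boundary_mlat, 1)
print(f"boundary = {a:.2f} * Kp + {b:.2f} deg MLAT")

def predict_boundary(kp_value, a=a, b=b):
    """Predict the boundary latitude, clamping Kp to the fitted range."""
    return a * np.clip(kp_value, 0, 6) + b
```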
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
NASA Astrophysics Data System (ADS)
Ushenko, Alexander G.; Dubolazov, Alexander V.; Ushenko, Vladimir A.; Novakovskaya, Olga Y.
2016-07-01
An optical model of the formation of the polarization structure of laser radiation scattered by polycrystalline networks of human skin in the Fourier plane was developed. The values of the statistical parameters (statistical moments of the 1st to 4th order) of polarization-inhomogeneous images of the skin surface in the Fourier plane were investigated. Diagnostic criteria for pathological processes in human skin and for differentiating their severity were determined.
Machine learning for neuroimaging with scikit-learn.
Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël
2014-01-01
Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.
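As a concrete illustration of the supervised "decoding" setting described above, a minimal scikit-learn pipeline on stand-in data could look like this; real studies would extract voxel features with a masking tool such as nilearn, and the arrays here are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for masked fMRI data: trials x voxels, with binary labels.
rng = np.random.RandomState(0)
X = rng.randn(80, 5000)
y = rng.randint(0, 2, size=80)
X[y == 1, :50] += 0.5          # weak signal in a few "voxels"

decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
scores = cross_val_score(decoder, X, y, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```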
Perez-Ponce, Hector; Daul, Christian; Wolf, Didier; Noel, Alain
2013-08-01
In mammography, image quality assessment has to be directly related to breast cancer indicator (e.g. microcalcifications) detectability. Recently, we proposed an X-ray source/digital detector (XRS/DD) model leading to such an assessment. This model simulates very realistic contrast-detail phantom (CDMAM) images leading to gold disc (representing microcalcifications) detectability thresholds that are very close to those of real images taken under the simulated acquisition conditions. The detection step was performed with a mathematical observer. The aim of this contribution is to include human observers into the disc detection process in real and virtual images to validate the simulation framework based on the XRS/DD model. Mathematical criteria (contrast-detail curves, image quality factor, etc.) are used to assess and to compare, from the statistical point of view, the cancer indicator detectability in real and virtual images. The quantitative results given in this paper show that the images simulated by the XRS/DD model are useful for image quality assessment in the case of all studied exposure conditions using either human or automated scoring. Also, this paper confirms that with the XRS/DD model the image quality assessment can be automated and the whole time of the procedure can be drastically reduced. Compared to standard quality assessment methods, the number of images to be acquired is divided by a factor of eight. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
A novel rotational invariants target recognition method for rotating motion blurred images
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen
2017-11-01
Images acquired by the image sensor are blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of first restoring the image and then identifying the target can improve the recognition rate, it is time consuming. To solve this problem, a rotational-blur-invariant feature extraction model was constructed that recognizes the target directly. The model comprises three metric layers whose object description capability ranges from low to high: a gray-value statistics algorithm, an improved ring projection transformation algorithm, and rotation-convolution moment invariants. The layer with the lowest description capability serves as the input and gradually eliminates non-target pixels of the target region from the degraded image. Experimental results show that the proposed model improves the correct target recognition rate for blurred images and achieves a good balance between computational complexity and regional description capability.
Statistical Lamb wave localization based on extreme value theory
NASA Astrophysics Data System (ADS)
Harley, Joel B.
2018-04-01
Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet, it is often unclear if this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant rate of false alarm statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show an expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
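A bare-bones delay-and-sum imager of the kind referred to above can be written directly in NumPy; this sketch assumes baseline-subtracted signals and a single known group velocity, and it is not the authors' statistical implementation.

```python
import numpy as np

def delay_and_sum(signals, sensor_xy, pairs, fs, c, grid_x, grid_y):
    """Minimal delay-and-sum image from residual (baseline-subtracted) guided-wave
    signals. signals[k] is the waveform for transmit/receive pair pairs[k];
    fs is the sampling rate (Hz), c the assumed group velocity (m/s)."""
    image = np.zeros((len(grid_y), len(grid_x)))
    for (tx, rx), sig in zip(pairs, signals):
        env = np.abs(sig)                       # crude envelope (simplifying assumption)
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                d = (np.hypot(x - sensor_xy[tx, 0], y - sensor_xy[tx, 1]) +
                     np.hypot(x - sensor_xy[rx, 0], y - sensor_xy[rx, 1]))
                idx = int(round(d / c * fs))    # travel-time sample for this pixel
                if idx < len(env):
                    image[iy, ix] += env[idx]
    return image
```

The statistical variants proposed in the paper would then convert such pixel values into constant-false-alarm-rate detection statistics rather than simply taking the image maximum.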
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
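The linear signal-dependent noise model described above can be folded into a standard matched-filter detector as a diagonal covariance term. The following sketch assumes known gain/offset constants a and b and is only illustrative of the idea, not the paper's implementation.

```python
import numpy as np

def sd_noise_covariance(mean_background, a, b):
    """Diagonal sensor-noise covariance whose variance grows linearly with
    signal level: var_i = a + b * s_i (a, b are assumed sensor constants)."""
    return np.diag(a + b * mean_background)

def matched_filter_scores(pixels, target, mu_b, cov_clutter, a, b):
    """Matched-filter detector whose covariance augments the clutter statistics
    with the signal-dependent noise term (an illustrative sketch)."""
    cov = cov_clutter + sd_noise_covariance(mu_b, a, b)
    w = np.linalg.solve(cov, target - mu_b)
    return (pixels - mu_b) @ w / np.sqrt((target - mu_b) @ w)
```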
Application of the quantum spin glass theory to image restoration.
Inoue, J I
2001-04-01
Quantum fluctuation is introduced into the Markov random-field model for image restoration in the context of a Bayesian approach. We investigate the dependence of the quantum fluctuation on the quality of a black and white image restoration by making use of statistical mechanics. We find that the maximum posterior marginal (MPM) estimate based on the quantum fluctuation gives a fine restoration in comparison with the maximum a posteriori estimate or the thermal fluctuation based MPM estimate.
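For the classical (thermal) MPM estimate mentioned in the abstract, a small Gibbs-sampling sketch for a ±1 image under an Ising prior is given below; the quantum-fluctuation version studied in the paper requires a different treatment (e.g., a transverse field), so this is only the baseline against which it is compared.

```python
import numpy as np

def mpm_restore(y, beta=0.9, h=1.1, n_sweeps=200, burn_in=50, seed=0):
    """Thermal MPM estimate for a binary (+/-1) image y degraded by channel
    noise: Gibbs sampling under an Ising prior with coupling beta and data term
    h, followed by a pixel-wise majority vote over the posterior samples."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    votes = np.zeros(y.shape, dtype=float)
    H, W = y.shape
    for sweep in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nb = (x[(i - 1) % H, j] + x[(i + 1) % H, j] +
                      x[i, (j - 1) % W] + x[i, (j + 1) % W])
                field = beta * nb + h * y[i, j]
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))   # P(x_ij = +1 | rest)
                x[i, j] = 1 if rng.random() < p_plus else -1
        if sweep >= burn_in:
            votes += x
    return np.where(votes >= 0, 1, -1)
```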
NASA Astrophysics Data System (ADS)
Andersen, Anders H.; Rayens, William S.; Li, Ren-Cang; Blonder, Lee X.
2000-10-01
In this paper we describe the enormous potential that multilinear models hold for the analysis of data from neuroimaging experiments that rely on functional magnetic resonance imaging (MRI) or other imaging modalities. A case is made for why one might fully expect that the successful introduction of these models to the neuroscience community could define the next generation of structure-seeking paradigms in the area. In spite of the potential for immediate application, there is much to do from the perspective of statistical science. That is, although multilinear models have already been particularly successful in chemistry and psychology, relatively little is known about their statistical properties. To that end, our research group at the University of Kentucky has made significant progress. In particular, we are in the process of developing formal influence measures for multilinear methods as well as associated classification models and effective implementations. We believe that these problems will be among the most important and useful to the scientific community. Details are presented herein and an application is given in the context of facial emotion processing experiments.
NASA Astrophysics Data System (ADS)
Chen, Hsin-Chen; Lin, Chii-Jeng; Wu, Chia-Hsing; Wang, Chien-Kuo; Sun, Yung-Nien
2010-11-01
The Insall-Salvati ratio (ISR) is important for detecting two common clinical signs of knee disease: patella alta and patella baja. Furthermore, large inter-operator differences in ISR measurement make an objective measurement system necessary for better clinical evaluation. In this paper, we define three specific bony landmarks for determining the ISR and then propose an x-ray image analysis system to localize these landmarks and measure the ISR. Inherent artifacts in x-ray images, such as unevenly distributed intensities, make landmark localization difficult; we therefore propose a registration-assisted active-shape model (RAASM) to localize these landmarks. We first construct a statistical model from a set of training images based on x-ray image intensity and patella shape. Since a knee x-ray image contains specific anatomical structures, we then design an algorithm, based on edge tracing, for patella feature extraction in order to automatically align the model to the patella image. We can estimate the landmark locations as well as the ISR after registration-assisted model fitting. Our proposed method successfully overcomes drawbacks caused by x-ray image artifacts. Experimental results show great agreement between the ISRs measured by the proposed method and by orthopedic clinicians.
NASA Astrophysics Data System (ADS)
Nemoto, Mitsutaka; Nomura, Yukihiro; Hanaoka, Shohei; Masutani, Yoshitaka; Yoshikawa, Takeharu; Hayashi, Naoto; Yoshioka, Naoki; Ohtomo, Kuni
Anatomical point landmarks, as the most primitive form of anatomical knowledge, are useful for medical image understanding. In this study, we propose a detection method for anatomical point landmarks based on appearance models, which include gray-level statistical variations at point landmarks and their surrounding area. The models are built based on the results of Principal Component Analysis (PCA) of sample data sets. In addition, we employ a generative learning method by transforming the ROI of the sample data. In this study, we evaluated our method with 24 data sets of body trunk CT images and obtained an average sensitivity of 95.8 ± 7.3% over 28 landmarks.
Prostate segmentation in MR images using discriminant boundary features.
Yang, Meijuan; Li, Xuelong; Turkbey, Baris; Choyke, Peter L; Yan, Pingkun
2013-02-01
Segmentation of the prostate in magnetic resonance images is increasingly needed to assist the diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified according to the location of the point of interest. To deal with this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to the locations. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out by incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the efficiency of the proposed algorithms.
Natural image statistics mediate brightness 'filling in'.
Dakin, Steven C; Bex, Peter J
2003-11-22
Although the human visual system can accurately estimate the reflectance (or lightness) of surfaces under enormous variations in illumination, two equiluminant grey regions can be induced to appear quite different simply by placing a light-dark luminance transition between them. This illusion, the Craik-Cornsweet-O'Brien (CCOB) effect, has been taken as evidence for a low-level 'filling-in' mechanism subserving lightness perception. Here, we present evidence that the mechanism responsible for the CCOB effect operates not via propagation of a neural signal across space but by amplification of the low spatial frequency (SF) structure of the image. We develop a simple computational model that relies on the statistics of natural scenes actively to reconstruct the image that is most likely to have caused an observed series of responses across SF channels. This principle is tested psychophysically by deriving classification images (CIs) for subjects' discrimination of the contrast polarity of CCOB stimuli masked with noise. CIs resemble 'filled-in' stimuli; i.e. observers rely on portions of the stimuli that contain no information per se but that correspond closely to the reported perceptual completion. As predicted by the model, the filling-in process is contingent on the presence of appropriate low SF structure.
Simulation of target interpretation based on infrared image features and psychology principle
NASA Astrophysics Data System (ADS)
Lin, Wei; Chen, Yu-hua; Gao, Hong-sheng; Wang, Zhan-feng; Wang, Ji-jun; Su, Rong-hua; Huang, Yan-ping
2009-07-01
Target feature extraction and identification is an important and complicated process in target interpretation; it directly affects the interpreter's psychosensory response to the target infrared image and ultimately determines target detectability. Using statistical decision theory and psychological principles, and by designing four psychophysical experiments, an interpretation model for infrared targets is established. The model obtains the target detection probability by calculating the similarity of four features between the target region and the background region, both delineated on the infrared image. Verified against a large number of practical target interpretations, the model can simulate the target interpretation and detection process effectively and yield objective interpretation results, providing technical support for target extraction, identification, and decision-making.
A Statistical Analysis of Brain Morphology Using Wild Bootstrapping
Ibrahim, Joseph G.; Tang, Niansheng; Rowe, Daniel B.; Hao, Xuejun; Bansal, Ravi; Peterson, Bradley S.
2008-01-01
Methods for the analysis of brain morphology, including voxel-based morphometry and surface-based morphometry, have been used to detect associations between brain structure and covariates of interest, such as diagnosis, severity of disease, age, IQ, and genotype. The statistical analysis of morphometric measures usually involves two statistical procedures: 1) invoking a statistical model at each voxel (or point) on the surface of the brain or brain subregion, followed by mapping test statistics (e.g., t test) or their associated p values at each of those voxels; 2) correction for the multiple statistical tests conducted across all voxels on the surface of the brain region under investigation. We propose the use of new statistical methods for each of these procedures. We first use a heteroscedastic linear model to test the associations between the morphological measures at each voxel on the surface of the specified subregion (e.g., cortical or subcortical surfaces) and the covariates of interest. Moreover, we develop a robust test procedure that is based on a resampling method, called wild bootstrapping. This procedure assesses the statistical significance of the associations between a measure of given brain structure and the covariates of interest. The value of this robust test procedure lies in its computational simplicity and in its applicability to a wide range of imaging data, including data from both anatomical and functional magnetic resonance imaging (fMRI). Simulation studies demonstrate that this robust test procedure can accurately control the family-wise error rate. We demonstrate the application of this robust test procedure to the detection of statistically significant differences in the morphology of the hippocampus over time across gender groups in a large sample of healthy subjects. PMID:17649909
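The wild bootstrap itself is simple to sketch at a single voxel: residuals of the null model are flipped with Rademacher signs, which preserves voxel-wise heteroscedasticity. The function below tests the last design column using the raw coefficient rather than a studentized statistic, purely for illustration; it is not the authors' implementation.

```python
import numpy as np

def wild_bootstrap_pvalue(y, X, n_boot=2000, seed=0):
    """Wild-bootstrap p value for the last column of the design matrix X at a
    single voxel/vertex (Rademacher multipliers on the null-model residuals)."""
    rng = np.random.default_rng(seed)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    t_obs = beta[-1]                                 # observed effect of interest
    X0 = X[:, :-1]                                   # null model without the covariate
    beta0 = np.linalg.lstsq(X0, y, rcond=None)[0]
    fitted0, resid0 = X0 @ beta0, y - X0 @ beta0
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=len(y))     # Rademacher weights
        y_star = fitted0 + w * resid0
        t_boot[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][-1]
    return (np.abs(t_boot) >= np.abs(t_obs)).mean()
```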
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
NASA Astrophysics Data System (ADS)
Thelen, Brian T.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.
2017-04-01
With all of the new remote sensing modalities available, and with ever increasing capabilities and frequency of collection, there is a desire to fundamentally understand/quantify the information content in the collected image data relative to various exploitation goals, such as detection/classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have significantly flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in the form of both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models fundamentally based on multivariate Gaussian copula models that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate, maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for the larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification theory is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. All of this framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies, and of utilizing the pseudo-likelihood methods.
Interpreting support vector machine models for multivariate group wise analysis in neuroimaging
Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos
2015-01-01
Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is much less conservative than weight-based permutation tests and yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging based classification. PMID:26210913
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Grootjans, Willem; Tixier, Florent; van der Vos, Charlotte S; Vriens, Dennis; Le Rest, Catherine C; Bussink, Johan; Oyen, Wim J G; de Geus-Oei, Lioe-Fee; Visvikis, Dimitris; Visser, Eric P
2016-11-01
Accurate measurement of intratumor heterogeneity using parameters of texture on PET images is essential for precise characterization of cancer lesions. In this study, we investigated the influence of respiratory motion and varying noise levels on quantification of textural parameters in patients with lung cancer. We used an optimal-respiratory-gating algorithm on the list-mode data of 60 lung cancer patients who underwent 18 F-FDG PET. The images were reconstructed using a duty cycle of 35% (percentage of the total acquired PET data). In addition, nongated images of varying statistical quality (using 35% and 100% of the PET data) were reconstructed to investigate the effects of image noise. Several global image-derived indices and textural parameters (entropy, high-intensity emphasis, zone percentage, and dissimilarity) that have been associated with patient outcome were calculated. The clinical impact of optimal respiratory gating and image noise on assessment of intratumor heterogeneity was evaluated using Cox regression models, with overall survival as the outcome measure. The threshold for statistical significance was adjusted for multiple comparisons using Bonferroni correction. In the lower lung lobes, respiratory motion significantly affected quantification of intratumor heterogeneity for all textural parameters (P < 0.007) except entropy (P > 0.007). The mean increase in entropy, dissimilarity, zone percentage, and high-intensity emphasis was 1.3% ± 1.5% (P = 0.02), 11.6% ± 11.8% (P = 0.006), 2.3% ± 2.2% (P = 0.002), and 16.8% ± 17.2% (P = 0.006), respectively. No significant differences were observed for lesions in the upper lung lobes (P > 0.007). Differences in the statistical quality of the PET images affected the textural parameters less than respiratory motion, with no significant difference observed. The median follow-up time was 35 mo (range, 7-39 mo). In multivariate analysis for overall survival, total lesion glycolysis and high-intensity emphasis were the two most relevant image-derived indices and were considered to be independent significant covariates for the model regardless of the image type considered. The tested textural parameters are robust in the presence of respiratory motion artifacts and varying levels of image noise. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
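Texture parameters such as entropy and dissimilarity are typically derived from a grey-level co-occurrence matrix of the quantized lesion volume. A small NumPy sketch (horizontal offset only, illustrative bin count) is given below; zone-based measures such as high-intensity emphasis and zone percentage would instead require a zone-size matrix, which is omitted here.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Entropy and dissimilarity from a horizontal-offset grey-level
    co-occurrence matrix of a quantized 2D lesion image (illustrative settings)."""
    q = np.floor(img / (img.max() + 1e-12) * levels).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):   # horizontal neighbours
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    dissimilarity = np.sum(p * np.abs(i - j))
    return entropy, dissimilarity
```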
Applied learning-based color tone mapping for face recognition in video surveillance system
NASA Astrophysics Data System (ADS)
Yew, Chuu Tian; Suandi, Shahrel Azmin
2012-04-01
In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied onto both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that the color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, thus creating additional challenges for face recognition in video surveillance systems. Using Multi-Class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable to improve the color or intensity quality of a video surveillance system for face recognition.
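The core remapping step, matching the input image's intensity statistics to those learned from the training photos, can be sketched with first and second moments only; the published technique may learn richer statistics, so treat this as an assumption-laden illustration.

```python
import numpy as np

def remap_intensity(img, target_mean, target_std):
    """Remap a surveillance image so its intensity statistics match those
    estimated from a training set of photorealistic face images (first and
    second moments only, as an illustration)."""
    img = img.astype(float)
    std = img.std() + 1e-12
    out = (img - img.mean()) / std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)

# The training-set statistics would be estimated once from the enrolment photos:
# target_mean, target_std = train_imgs.mean(), train_imgs.std()
```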
Diurnal fluctuations in brain volume: Statistical analyses of MRI from large populations.
Nakamura, Kunio; Brown, Robert A; Narayanan, Sridar; Collins, D Louis; Arnold, Douglas L
2015-09-01
We investigated fluctuations in brain volume throughout the day using statistical modeling of magnetic resonance imaging (MRI) from large populations. We applied fully automated image analysis software to measure the brain parenchymal fraction (BPF), defined as the ratio of the brain parenchymal volume and intracranial volume, thus accounting for variations in head size. The MRI data came from serial scans of multiple sclerosis (MS) patients in clinical trials (n=755, 3269 scans) and from subjects participating in the Alzheimer's Disease Neuroimaging Initiative (ADNI, n=834, 6114 scans). The percent change in BPF was modeled with a linear mixed effect (LME) model, and the model was applied separately to the MS and ADNI datasets. The LME model for the MS datasets included random subject effects (intercept and slope over time) and fixed effects for the time-of-day, time from the baseline scan, and trial, which accounted for trial-related effects (for example, different inclusion criteria and imaging protocol). The model for ADNI additionally included the demographics (baseline age, sex, subject type [normal, mild cognitive impairment, or Alzheimer's disease], and interaction between subject type and time from baseline). There was a statistically significant effect of time-of-day on the BPF change in MS clinical trial datasets (-0.180 per day, that is, 0.180% of intracranial volume, p=0.019) as well as the ADNI dataset (-0.438 per day, that is, 0.438% of intracranial volume, p<0.0001), showing that the brain volume is greater in the morning. Linearly correcting the BPF values with the time-of-day reduced the required sample size to detect a 25% treatment effect (80% power and 0.05 significance level) on change in brain volume from 2 time-points over a period of 1 year by 2.6%. Our results have significant implications for future brain volumetric studies, suggesting that there is a potential acquisition time bias that should be randomized or statistically controlled to account for the day-to-day brain volume fluctuations. Copyright © 2015 Elsevier Inc. All rights reserved.
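A linear mixed-effects model of the kind described, with random subject intercepts and slopes plus fixed effects for time of day, can be expressed directly in statsmodels; the column names and file below are hypothetical stand-ins for the study variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format table with one row per scan; all names here are hypothetical.
# Expected columns: subject, bpf_change, time_of_day, years_from_baseline, trial
df = pd.read_csv("bpf_scans.csv")

model = smf.mixedlm(
    "bpf_change ~ time_of_day + years_from_baseline + C(trial)",
    df,
    groups=df["subject"],
    re_formula="~years_from_baseline",   # random intercept and slope per subject
)
result = model.fit()
print(result.summary())                  # the time_of_day coefficient is the diurnal effect
```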
2017-01-01
When adjusting the contrast setting on a television set, we experience a perceptual change in the global image contrast. But how is that statistic computed? We addressed this using a contrast-matching task for checkerboard configurations of micro-patterns in which the contrasts and spatial spreads of two interdigitated components were controlled independently. When the patterns differed greatly in contrast, the higher contrast determined the perceived global contrast. Crucially, however, low contrast additions of one pattern to intermediate contrasts of the other caused a paradoxical reduction in the perceived global contrast. None of the following metrics/models predicted this: max, linear sum, average, energy, root mean squared (RMS), Legge and Foley. However, a nonlinear gain control model, derived from contrast detection and discrimination experiments, incorporating wide-field summation and suppression, did predict the results with no free parameters, but only when spatial filtering was removed. We conclude that our model describes fundamental processes in human contrast vision (the pattern of results was the same for expert and naive observers), but that above threshold—when contrast pedestals are clearly visible—vision's spatial filtering characteristics become transparent, tending towards those of a delta function prior to spatial summation. The global contrast statistic from our model is as easily derived as the RMS contrast of an image, and since it more closely relates to human perception, we suggest it be used as an image contrast metric in practical applications. PMID:28989735
A new method for detecting small and dim targets in starry background
NASA Astrophysics Data System (ADS)
Yao, Rui; Zhang, Yanning; Jiang, Lei
2011-08-01
Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the limitations of the imaging device itself. Random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we establish a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (scale-invariant feature transform) features from the image frames, calculate the transform relationship, and use it to compensate for the motion of the whole visual field. Thirdly, the influence of stars is removed using an interframe difference method. A segmentation threshold differentiating candidate targets from noise is found using the Otsu method. Finally, a statistical quantity is calculated at every pixel position to judge whether the target is present. Theoretical analysis shows the relationship between false-alarm probability and detection probability at different SNRs. Experimental results show that the method detects targets efficiently, even when a target passes in front of stars.
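The motion-compensated interframe difference followed by Otsu thresholding can be sketched as follows; the warp callable stands in for the SIFT-derived frame-to-frame transform and is an assumption of this illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu

def candidate_targets(prev_frame, curr_frame, warp):
    """Interframe difference after motion compensation, then Otsu thresholding
    to separate candidate targets from noise. `warp` is assumed to map the
    previous frame into the current frame's coordinates (in the paper this
    comes from SIFT feature matching)."""
    compensated = warp(prev_frame)
    diff = np.abs(curr_frame.astype(float) - compensated.astype(float))
    t = threshold_otsu(diff)
    return diff > t            # boolean mask of candidate target pixels
```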
Statistical label fusion with hierarchical performance models
Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.
2014-01-01
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809
Automated planning of ablation targets in atrial fibrillation treatment
NASA Astrophysics Data System (ADS)
Keustermans, Johannes; De Buck, Stijn; Heidbüchel, Hein; Suetens, Paul
2011-03-01
Catheter based radio-frequency ablation is used as an invasive treatment of atrial fibrillation. This procedure is often guided by the use of 3D anatomical models obtained from CT, MRI or rotational angiography. During the intervention the operator accurately guides the catheter to prespecified target ablation lines. The planning stage, however, can be time consuming and operator dependent which is suboptimal both from a cost and health perspective. Therefore, we present a novel statistical model-based algorithm for locating ablation targets from 3D rotational angiography images. Based on a training data set of 20 patients, consisting of 3D rotational angiography images with 30 manually indicated ablation points, a statistical local appearance and shape model is built. The local appearance model is based on local image descriptors to capture the intensity patterns around each ablation point. The local shape model is constructed by embedding the ablation points in an undirected graph and imposing that each ablation point only interacts with its neighbors. Identifying the ablation points on a new 3D rotational angiography image is performed by proposing a set of possible candidate locations for each ablation point, as such, converting the problem into a labeling problem. The algorithm is validated using a leave-one-out-approach on the training data set, by computing the distance between the ablation lines obtained by the algorithm and the manually identified ablation points. The distance error is equal to 3.8 ± 2.9 mm. As ablation lesion size is around 5-7 mm, automated planning of ablation targets by the presented approach is sufficiently accurate.
A unified framework for image retrieval using keyword and visual features.
Jing, Feng; Li, Mingling; Zhang, Hong-Jiang; Zhang, Bo
2005-07-01
In this paper, a unified image retrieval framework based on both keyword annotations and visual features is proposed. In this framework, a set of statistical models are built based on visual features of a small set of manually labeled images to represent semantic concepts and used to propagate keywords to other unlabeled images. These models are updated periodically when more images implicitly labeled by users become available through relevance feedback. In this sense, the keyword models serve the function of accumulation and memorization of knowledge learned from user-provided relevance feedback. Furthermore, two sets of effective and efficient similarity measures and relevance feedback schemes are proposed for query by keyword scenario and query by image example scenario, respectively. Keyword models are combined with visual features in these schemes. In particular, a new, entropy-based active learning strategy is introduced to improve the efficiency of relevance feedback for query by keyword. Furthermore, a new algorithm is proposed to estimate the keyword features of the search concept for query by image example. It is shown to be more appropriate than two existing relevance feedback algorithms. Experimental results demonstrate the effectiveness of the proposed framework.
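The entropy-based active learning step, showing the user the images whose current keyword probabilities are most ambiguous, reduces to a few lines; this is a generic sketch of the strategy named in the abstract rather than the authors' exact scheme.

```python
import numpy as np

def most_informative(probabilities, k=5):
    """Entropy-based active selection for relevance feedback: return the indices
    of the k images whose current keyword-model probabilities are most uncertain."""
    p = np.clip(probabilities, 1e-12, 1.0)            # shape: (n_images, n_keywords)
    p = p / p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(entropy)[::-1][:k]
```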
Effect of attenuation correction on image quality in emission tomography
NASA Astrophysics Data System (ADS)
Denisova, N. V.; Ondar, M. M.
2017-10-01
In this paper, mathematical modeling and computer simulations of myocardial perfusion SPECT imaging are performed. The main factors affecting the quality of reconstructed images in SPECT are anatomical structures, the diastolic volume of the myocardium, and attenuation of gamma rays. The purpose of the present work is to study the effect of attenuation correction on image quality in emission tomography. A basic 2D model describing the Tc-99m distribution in a transaxial slice of the thoracic part of a patient body was designed. This model was used to construct four phantoms simulating various anatomical shapes: two male and two female patients with normal, obese, and slight physiques were included in the study. A data acquisition model, which includes the effects of non-uniform attenuation, collimator-detector response, and Poisson statistics, was developed. The projection data were calculated for 60 views in accordance with the standard myocardial perfusion SPECT imaging protocol. Image reconstructions were performed using the OSEM algorithm, which is widely used in modern SPECT systems. Two types of patient examination procedures were simulated: SPECT without attenuation correction and SPECT/CT with attenuation correction. The obtained results indicate a significant effect of attenuation correction on SPECT image quality.
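The OSEM algorithm mentioned above is an ordered-subsets acceleration of the MLEM update; a single-subset (plain MLEM) sketch, where attenuation correction would enter through the system matrix, is shown below as an illustration rather than the authors' code.

```python
import numpy as np

def mlem(sino, A, n_iter=20):
    """Plain MLEM reconstruction (OSEM with a single subset): A is the system
    matrix mapping image pixels to projection bins, and it may include the
    attenuation factors when CT-based attenuation correction is applied."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                  # sensitivity image
    for _ in range(n_iter):
        forward = A @ x + 1e-12
        x *= (A.T @ (sino / forward)) / (sens + 1e-12)
    return x
```

OSEM applies the same multiplicative update once per projection subset within each iteration, which accelerates convergence.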
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Qingge; Song, Gian; Gorti, Sarma B.
2018-02-21
Bragg-edge imaging, which is also known as neutron radiography, has recently emerged as a novel crystalline characterization technique. Modelling of this novel technique by incorporating various features of the underlying microstructure (including the crystallographic texture, the morphological texture, and the grain size) of the material remains a subject of considerable research and development. In this paper, Inconel 718 samples made by additive manufacturing were investigated by neutron diffraction and neutron radiography techniques. The specimen features strong morphological and crystallographic textures and a highly heterogeneous microstructure. A 3D statistical full-field model is introduced by taking details of the microstructure into account to understand the experimental neutron radiography results. The Bragg-edge imaging and the total cross section were calculated based on the neutron transmission physics. A good match was obtained between the model predictions and experimental results at different incident beam angles with respect to the sample build direction. The current theoretical approach has the ability to incorporate 3D spatially resolved microstructural heterogeneity information and shows promise in understanding the 2D neutron radiography of bulk samples. With further development to incorporate the heterogeneity in lattice strain in the model, it can be used as a powerful tool in the future to better understand the neutron radiography data.
Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging
NASA Astrophysics Data System (ADS)
Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.
2017-03-01
A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. The wheat kernels artificially inoculated with two different OTA producing Penicillium verrucosum strains, two different non-toxigenic P. verrucosum strains, and sterile control wheat kernels were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths which had greater significance in detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in the linear, quadratic and Mahalanobis statistical discriminant models to differentiate between sterile control, five concentration levels of OTA contamination in wheat kernels, and five infection levels of non-OTA producing P. verrucosum inoculated wheat kernels. The classification models differentiated sterile control samples from OTA contaminated wheat kernels and non-OTA producing P. verrucosum inoculated wheat kernels with a 100% accuracy. The classification models also differentiated between five concentration levels of OTA contaminated wheat kernels and between five infection levels of non-OTA producing P. verrucosum inoculated wheat kernels with a correct classification of more than 98%. The non-OTA producing P. verrucosum inoculated wheat kernels and OTA contaminated wheat kernels subjected to hyperspectral imaging provided different spectral patterns.
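The classification pipeline described, PCA to identify key wavelengths followed by linear and quadratic discriminant models, can be sketched with scikit-learn; the arrays below are random placeholders for the extracted kernel spectra, and the Mahalanobis discriminant used in the paper is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

# X: (n_kernels, n_wavelengths) mean spectra from the hyperspectral cube;
# y: class labels (control, OTA levels, ...). Both are placeholders here.
rng = np.random.RandomState(0)
X = rng.randn(300, 120)
y = rng.randint(0, 6, size=300)

# PCA loadings indicate which wavelengths dominate the leading component.
pca = PCA(n_components=5).fit(X)
key_bands = np.argsort(np.abs(pca.components_[0]))[::-1][:10]

for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    scores = cross_val_score(clf, X[:, key_bands], y, cv=5)
    print(type(clf).__name__, scores.mean())
```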
Learning optimal features for visual pattern recognition
NASA Astrophysics Data System (ADS)
Labusch, Kai; Siewert, Udo; Martinetz, Thomas; Barth, Erhardt
2007-02-01
The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statistical independence, are related to each other, (ii) propose to evaluate the different models by verifiable performance measures, and (iii) analyse the classification performance on images of handwritten digits (MNIST data base). We first employ the SPARSENET algorithm (Olshausen, 1998) to derive a local filter basis (on 13 × 13 pixel windows). We then filter the images in the database (28 × 28 pixel images of digits) and reduce the dimensionality of the resulting feature space by selecting the locally maximal filter responses. We then train a support vector machine on a training set to classify the digits and report results obtained on a separate test set. Currently, the best state-of-the-art result on the MNIST data base has an error rate of 0.4%. This result, however, has been obtained by using explicit knowledge that is specific to the data (elastic distortion model for digits). We here obtain an error rate of 0.55% which is second best but does not use explicit data specific knowledge. In particular it outperforms by far all methods that do not use data-specific knowledge.
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
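Fitting the preferred lognormal reference model to one laboratory's diameter measurements is a one-line call in SciPy; the synthetic data below merely stand in for a measured distribution, and the parameter uncertainties would in practice be estimated per laboratory.

```python
import numpy as np
from scipy import stats

# Area-equivalent diameters (nm) measured from TEM images; synthetic stand-ins.
rng = np.random.default_rng(1)
diameters = rng.lognormal(mean=np.log(27.6), sigma=0.09, size=500)

# Fit the lognormal reference model with the location fixed at zero.
shape, loc, scale = stats.lognorm.fit(diameters, floc=0)
geo_mean, geo_sd = scale, np.exp(shape)
print(f"geometric mean = {geo_mean:.1f} nm, geometric SD = {geo_sd:.3f}")

# Relative standard errors of the fitted parameters could then be estimated by
# bootstrapping the fit, which is how competing reference models were compared.
```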
Lu, Pei; Xia, Jun; Li, Zhicheng; Xiong, Jing; Yang, Jian; Zhou, Shoujun; Wang, Lei; Chen, Mingyang; Wang, Cheng
2016-11-08
Accurate segmentation of blood vessels plays an important role in the computer-aided diagnosis and interventional treatment of vascular diseases. The statistical method is an important component of effective vessel segmentation; however, several limitations discourage the segmentation effect, i.e., dependence of the image modality, uneven contrast media, bias field, and overlapping intensity distribution of the object and background. In addition, the mixture models of the statistical methods are constructed relaying on the characteristics of the image histograms. Thus, it is a challenging issue for the traditional methods to be available in vessel segmentation from multi-modality angiographic images. To overcome these limitations, a flexible segmentation method with a fixed mixture model has been proposed for various angiography modalities. Our method mainly consists of three parts. Firstly, multi-scale filtering algorithm was used on the original images to enhance vessels and suppress noises. As a result, the filtered data achieved a new statistical characteristic. Secondly, a mixture model formed by three probabilistic distributions (two Exponential distributions and one Gaussian distribution) was built to fit the histogram curve of the filtered data, where the expectation maximization (EM) algorithm was used for parameters estimation. Finally, three-dimensional (3D) Markov random field (MRF) were employed to improve the accuracy of pixel-wise classification and posterior probability estimation. To quantitatively evaluate the performance of the proposed method, two phantoms simulating blood vessels with different tubular structures and noises have been devised. Meanwhile, four clinical angiographic data sets from different human organs have been used to qualitatively validate the method. To further test the performance, comparison tests between the proposed method and the traditional ones have been conducted on two different brain magnetic resonance angiography (MRA) data sets. The results of the phantoms were satisfying, e.g., the noise was greatly suppressed, the percentages of the misclassified voxels, i.e., the segmentation error ratios, were no more than 0.3%, and the Dice similarity coefficients (DSCs) were above 94%. According to the opinions of clinical vascular specialists, the vessels in various data sets were extracted with high accuracy since complete vessel trees were extracted while lesser non-vessels and background were falsely classified as vessel. In the comparison experiments, the proposed method showed its superiority in accuracy and robustness for extracting vascular structures from multi-modality angiographic images with complicated background noises. The experimental results demonstrated that our proposed method was available for various angiographic data. The main reason was that the constructed mixture probability model could unitarily classify vessel object from the multi-scale filtered data of various angiography images. The advantages of the proposed method lie in the following aspects: firstly, it can extract the vessels with poor angiography quality, since the multi-scale filtering algorithm can improve the vessel intensity in the circumstance such as uneven contrast media and bias field; secondly, it performed well for extracting the vessels in multi-modality angiographic images despite various signal-noises; and thirdly, it was implemented with better accuracy, and robustness than the traditional methods. 
Overall, these traits suggest that the proposed method has significant potential for clinical application.
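As an illustration of the second part of the method, the following is a minimal sketch of EM fitting for a three-component mixture of two Exponential distributions and one Gaussian, assuming the multi-scale-filtered voxel intensities are available as a 1-D array x; the initialization, component roles, and iteration count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import expon, norm

def fit_vessel_mixture(x, n_iter=200):
    # Illustrative EM for a mixture of two Exponentials (background) and one Gaussian (vessel)
    w = np.array([0.45, 0.45, 0.10])                 # mixing weights (assumed initialization)
    lam = np.array([1.0, 4.0]) / np.mean(x)          # exponential rate parameters
    mu, sigma = np.percentile(x, 95), np.std(x)      # Gaussian (vessel) parameters
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each voxel
        p = np.vstack([w[0] * expon.pdf(x, scale=1.0 / lam[0]),
                       w[1] * expon.pdf(x, scale=1.0 / lam[1]),
                       w[2] * norm.pdf(x, mu, sigma)])
        r = p / np.maximum(p.sum(axis=0, keepdims=True), 1e-12)
        # M-step: update weights and component parameters
        nk = r.sum(axis=1)
        w = nk / x.size
        lam = nk[:2] / (r[:2] * x).sum(axis=1)
        mu = (r[2] * x).sum() / nk[2]
        sigma = np.sqrt((r[2] * (x - mu) ** 2).sum() / nk[2])
    return w, lam, mu, sigma
```

The per-voxel responsibilities of the Gaussian component give a vessel probability map that a 3D MRF prior could then regularize spatially, as described above.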
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images remains, however, a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentations of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and that arising from the aggregation of information in the MIP images. These noise models are then used within a probabilistic (Bayesian) framework to provide a coarse 2D dendritic tree segmentation. Finally, some post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a Leave-One-Out Cross Validation (LOOCV) method for an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentations over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operator Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentations in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterization of tree arborization.
Robust kernel representation with statistical local features for face recognition.
Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David
2013-06-01
Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
Research on the Hotel Image Based on the Detail Service
NASA Astrophysics Data System (ADS)
Li, Ban; Shenghua, Zheng; He, Yi
Detail service management, initially developed as marketing programs to enhance customer loyalty, has now become an important part of customer relation strategy. This paper analyzes the critical factors of detail service and their influence on the hotel image. We establish a theoretical model of the factors influencing hotel image and propose corresponding hypotheses. We apply statistical methods to test and verify the above-mentioned hypotheses. This paper provides a foundation for further study of detail service design and planning issues.
Hu, Yipeng; Morgan, Dominic; Ahmed, Hashim Uddin; Pendsé, Doug; Sahu, Mahua; Allen, Clare; Emberton, Mark; Hawkes, David; Barratt, Dean
2008-01-01
A method is described for generating a patient-specific, statistical motion model (SMM) of the prostate gland. Finite element analysis (FEA) is used to simulate the motion of the gland using an ultrasound-based 3D FE model over a range of plausible boundary conditions and soft-tissue properties. By applying principal component analysis to the displacements of the FE mesh node points inside the gland, the simulated deformations are then used as training data to construct the SMM. The SMM is used to both predict the displacement field over the whole gland and constrain a deformable surface registration algorithm, given only a small number of target points on the surface of the deformed gland. Using 3D transrectal ultrasound images of the prostates of five patients, acquired before and after imposing a physical deformation, to evaluate the accuracy of predicted landmark displacements, the mean target registration error was found to be less than 1.9 mm.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value, and the method noise, the proposed algorithm outperforms state-of-the-art methods.
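One plausible way to write such a three-term objective, assuming a blur operator H, observed counts f, patch extraction operators R_i, a learned dictionary D, and sparse codes alpha_i (the exact weights, norms, and notation used in the paper may differ):

$$\min_{u,\{\alpha_i\}}\;\sum_k \Big[(Hu)_k - f_k \log (Hu)_k\Big] \;+\; \lambda\,\mathrm{TV}(u) \;+\; \mu \sum_i \Big(\tfrac{1}{2}\,\|R_i u - D\alpha_i\|_2^2 + \tau\,\|\alpha_i\|_0\Big)$$

Here the first sum is the Poisson negative log-likelihood (up to constants), the middle term is total variation regularization, and the last term ties image patches to their sparse representation over the dictionary; alternating minimization with variable splitting then updates the image u and the codes alpha_i in turn.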
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are most visually pleasing.
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background: The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model better characterizes the diffusion weighted imaging (DWI) signal of malignant breast tumors than the mono-exponential Gaussian diffusion model. Methods: 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-square mono-exponential fitting and segmented least-square bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference between the mono-exponential and bi-exponential models using region-of-interest (ROI)-averaged and voxel-wise analysis. Results: For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient), and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential-preferable and bi-exponential-preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions: Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining breast cancer DWI signal characteristics in practice. PMID:27709078
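A sketch of the mono- versus bi-exponential comparison on an ROI-averaged signal, with a simulated curve, placeholder b-values, and a simple least-squares fit standing in for the study's segmented fitting and acquisition protocol:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(b, s0, adc):
    return s0 * np.exp(-b * adc)

def ivim(b, s0, f, dstar, d):
    return s0 * (f * np.exp(-b * dstar) + (1 - f) * np.exp(-b * d))

def aic(y, yhat, k):
    # AIC from the residual sum of squares (Gaussian error assumption)
    n, rss = len(y), np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

b = np.array([0, 50, 100, 200, 400, 600, 800.0])                       # hypothetical b-values (s/mm^2)
s = ivim(b, 1.0, 0.10, 1e-2, 1e-3) + 0.01 * np.random.randn(b.size)    # surrogate ROI-averaged signal

p_mono, _ = curve_fit(mono, b, s, p0=[1.0, 1e-3])
p_ivim, _ = curve_fit(ivim, b, s, p0=[1.0, 0.1, 1e-2, 1e-3],
                      bounds=([0, 0, 1e-3, 0], [np.inf, 1, 1, 1e-2]))
print("AIC mono:", aic(s, mono(b, *p_mono), k=2))
print("AIC IVIM:", aic(s, ivim(b, *p_ivim), k=4))    # lower AIC indicates the preferred model
```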
Image informative maps for component-wise estimating parameters of signal-dependent noise
NASA Astrophysics Data System (ADS)
Uss, Mykhail L.; Vozel, Benoit; Lukin, Vladimir V.; Chehdi, Kacem
2013-01-01
We deal with the problem of blind parameter estimation of signal-dependent noise from mono-component image data. Multispectral or color images can be processed in a component-wise manner. The main results obtained rest on the assumption that the image texture and noise parameters estimation problems are interdependent. A two-dimensional fractal Brownian motion (fBm) model is used for locally describing image texture. A polynomial model is assumed for the purpose of describing the signal-dependent noise variance dependence on image intensity. Using the maximum likelihood approach, estimates of both fBm-model and noise parameters are obtained. It is demonstrated that Fisher information (FI) on noise parameters contained in an image is distributed nonuniformly over intensity coordinates (an image intensity range). It is also shown how to find the most informative intensities and the corresponding image areas for a given noisy image. The proposed estimator benefits from these detected areas to improve the estimation accuracy of signal-dependent noise parameters. Finally, the potential estimation accuracy (Cramér-Rao Lower Bound, or CRLB) of noise parameters is derived, providing confidence intervals of these estimates for a given image. In the experiment, the proposed and existing state-of-the-art noise variance estimators are compared for a large image database using CRLB-based statistical efficiency criteria.
Assessment of corneal properties based on statistical modeling of OCT speckle
Jesus, Danilo A.; Iskander, D. Robert
2016-01-01
A new approach to assess the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of the essence. PMID:28101409
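A sketch of the distribution comparison by AIC with SciPy, using surrogate speckle amplitudes in place of OCT data; the candidate set and the zero-fixed location parameter are assumptions.

```python
import numpy as np
from scipy import stats

def aic(dist, data):
    params = dist.fit(data, floc=0)      # location fixed at zero for amplitude data
    k = len(params) - 1                  # free parameters (loc is fixed)
    return 2 * k - 2 * np.sum(dist.logpdf(data, *params))

speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=30.0, size=5000)   # surrogate OCT speckle amplitudes
for dist in (stats.gengamma, stats.gamma, stats.weibull_min, stats.rayleigh):
    print(f"{dist.name:12s} AIC = {aic(dist, speckle):.1f}")        # lowest AIC wins
```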
Direct atomic force microscopy observation of DNA tile crystal growth at the single-molecule level.
Evans, Constantine G; Hariadi, Rizal F; Winfree, Erik
2012-06-27
While the theoretical implications of models of DNA tile self-assembly have been extensively researched and such models have been used to design DNA tile systems for use in experiments, there has been little research testing the fundamental assumptions of those models. In this paper, we use direct observation of individual tile attachments and detachments of two DNA tile systems on a mica surface imaged with an atomic force microscope (AFM) to compile statistics of tile attachments and detachments. We show that these statistics fit the widely used kinetic Tile Assembly Model and demonstrate AFM movies as a viable technique for directly investigating DNA tile systems during growth rather than after assembly.
Xu, Yifan; Sun, Jiayang; Carter, Rebecca R; Bogie, Kath M
2014-05-01
Stereophotogrammetric digital imaging enables rapid and accurate detailed 3D wound monitoring. This rich data source was used to develop a statistically validated model to provide personalized predictive healing information for chronic wounds. 147 valid wound images were obtained from a sample of 13 category III/IV pressure ulcers from 10 individuals with spinal cord injury. Statistical comparison of several models indicated the best fit for the clinical data was a personalized mixed-effects exponential model (pMEE), with initial wound size and time as predictors and observed wound size as the response variable. Random effects capture personalized differences. Other models are only valid when wound size constantly decreases. This is often not achieved for clinical wounds. Our model accommodates this reality. Two criteria to determine effective healing time outcomes are proposed: r-fold wound size reduction time, t(r-fold), is defined as the time when wound size reduces to 1/r of initial size. t(δ) is defined as the time when the rate of the wound healing/size change reduces to a predetermined threshold δ < 0. Healing rate differs from patient to patient. Model development and validation indicates that accurate monitoring of wound geometry can adaptively predict healing progression and that larger wounds heal more rapidly. Accuracy of the prediction curve in the current model improves with each additional evaluation. Routine assessment of wounds using detailed stereophotogrammetric imaging can provide personalized predictions of wound healing time. Application of a valid model will help the clinical team to determine wound management care pathways. Published by Elsevier Ltd.
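One plausible parameterization of the personalized mixed-effects exponential model and the two healing-time criteria, with A_i the initial wound size and k_i a patient-specific healing rate (the exact random-effects structure used in the study is not specified here, so this is an assumed form):

$$y_{ij} = A_i\, e^{-k_i t_{ij}} + \varepsilon_{ij},\qquad k_i = k + b_i,\quad b_i \sim \mathcal{N}(0,\sigma_b^2),\quad \varepsilon_{ij}\sim\mathcal{N}(0,\sigma^2)$$

$$t_{r\text{-fold}} = \frac{\ln r}{k_i},\qquad \left|\frac{dy}{dt}\right| = A_i k_i\, e^{-k_i t_\delta} = |\delta| \;\Rightarrow\; t_\delta = \frac{1}{k_i}\ln\frac{A_i k_i}{|\delta|}$$

Under this form, the r-fold reduction time follows directly from solving A_i e^{-k_i t} = A_i / r, and t_delta is the time at which the absolute healing rate drops to the chosen threshold.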
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin
2014-05-01
Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques.
Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel
2014-05-01
To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
Post-modelling of images from a laser-induced wavy boiling front
NASA Astrophysics Data System (ADS)
Matti, R. S.; Kaplan, A. F. H.
2015-12-01
Processes like laser keyhole welding, remote fusion laser cutting or laser drilling are governed by a highly dynamic wavy boiling front that was recently recorded by ultra-high speed imaging. A new approach has now been established by post-modelling of the high speed images. Based on the image greyscale and on a cavity model the three-dimensional front topology is reconstructed. As a second step the Fresnel absorptivity modulation across the wavy front is calculated, combined with the local projection of the laser beam. Frequency polygons enable additional analysis of the statistical variations of the properties across the front. Trends like shadow formation and time dependency can be studied, locally and for the whole front. Despite strong topology modulation in space and time, for lasers with 1 μm wavelength and steel the absorptivity is bounded to a narrow range of 35-43%, owing to its Fresnel characteristics.
Statistical wiring of thalamic receptive fields optimizes spatial sampling of the retinal image
Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.
2014-01-01
It is widely assumed that mosaics of retinal ganglion cells establish the optimal representation of visual space. However, relay cells in the visual thalamus often receive convergent input from several retinal afferents and, in cat, outnumber ganglion cells. To explore how the thalamus transforms the retinal image, we built a model of the retinothalamic circuit using experimental data and simple wiring rules. The model shows how the thalamus might form a resampled map of visual space with the potential to facilitate detection of stimulus position in the presence of sensor noise. Bayesian decoding conducted with the model provides support for this scenario. Despite its benefits, however, resampling introduces image blur, thus impairing edge perception. Whole-cell recordings obtained in vivo suggest that this problem is mitigated by arrangements of excitation and inhibition within the receptive field that effectively boost contrast borders, much like strategies used in digital image processing. PMID:24559681
Real-world visual statistics and infants' first-learned object names.
Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B
2017-01-05
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present-a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
Association between pathology and texture features of multi parametric MRI of the prostate
NASA Astrophysics Data System (ADS)
Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve
2017-10-01
The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is the evaluation using histogram-based (first order statistics) parameters and textural features (second order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions to automatically detect malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First and second order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first and second order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures between malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.
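A sketch of extracting first order (histogram-based) and second order (grey-level co-occurrence) statistics from a delineated 2D region with NumPy and scikit-image; the quantization level, offsets, and feature set are illustrative choices rather than the study's protocol, and the function names follow recent scikit-image releases (older releases spell them greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def first_order_stats(roi):
    # Histogram-based descriptors of the pixel intensities in a 2D region
    m, s = roi.mean(), roi.std()
    return {"mean": m, "std": s,
            "skewness": ((roi - m) ** 3).mean() / s ** 3,
            "p10": np.percentile(roi, 10), "p90": np.percentile(roi, 90)}

def second_order_stats(roi, levels=32):
    # GLCM texture features after quantizing the 2D region to `levels` grey levels
    edges = np.linspace(roi.min(), roi.max(), levels)
    q = (np.digitize(roi, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}
```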
Lu, Lee-Jane W.; Nishino, Thomas K.; Khamapirad, Tuenchit; Grady, James J; Leonard, Morton H.; Brunder, Donald G.
2009-01-01
Breast density (the percentage of fibroglandular tissue in the breast) has been suggested to be a useful surrogate marker for breast cancer risk. It is conventionally measured using screen-film mammographic images by a labor intensive histogram segmentation method (HSM). We have adapted and modified the HSM for measuring breast density from raw digital mammograms acquired by full-field digital mammography. Multiple regression model analyses showed that many of the instrument parameters for acquiring the screening mammograms (e.g. breast compression thickness, radiological thickness, radiation dose, compression force, etc) and image pixel intensity statistics of the imaged breasts were strong predictors of the observed threshold values (model R2=0.93) and %density (R2=0.84). The intra-class correlation coefficient of the %-density for duplicate images was estimated to be 0.80, using the regression model-derived threshold values, and 0.94 if estimated directly from the parameter estimates of the %-density prediction regression model. Therefore, with additional research, these mathematical models could be used to compute breast density objectively, automatically bypassing the HSM step, and could greatly facilitate breast cancer research studies. PMID:17671343
Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.
Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar
2017-11-03
Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or registering across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) measure as a similarity metric together with the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the image registration process. Maximization of the modified mutual information is effected using Particle Swarm Optimization, which is easy to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image data sets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
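A minimal sketch of the plain histogram-based mutual information that such registration schemes maximize; the paper's weighted combination with gradient vector flow intensity and the PSO search itself are not reproduced here.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Histogram-based MI between two equally shaped images (fixed vs. transformed moving)
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A PSO (or any global optimizer) would then search the transform parameters that
# maximize this similarity between the fixed image and the warped moving image.
```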
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
Improvement and extension of a radar forest backscattering model
NASA Technical Reports Server (NTRS)
Simonett, David S.; Wang, Yong
1989-01-01
Radar modeling of mangal forest stands, in the Sundarbans area of Southern Bangladesh, was developed. The modeling employs radar system parameters with forest data on tree height, spacing, biomass, species combinations, and water content (including slightly conductive water) both in leaves and trunks of the mangal. For Sundri and Gewa tropical mangal forests, six model components are proposed, which are required to explain the contributions of various forest species combinations in the attenuation and scattering of mangal-vegetated nonflooded or flooded surfaces. Statistical data of simulated images were compared with those of SIR-B images both to refine the modeling procedures and to appropriately characterize the model output. The possibility of delineation of flooded or nonflooded boundaries is discussed.
Ziegler, Ronny; Brendel, Bernhard; Rinneberg, Herbert; Nielsen, Tim
2009-01-21
Using a statistical (chi-square) test on simulated data and a realistic noise model derived from the system's hardware we study the performance of diffuse optical tomography systems for fluorescence imaging. We compare the predicted smallest size of detectable lesions at various positions in slab and cup geometry and model how detection sensitivity depends on breast compression and lesion fluorescence contrast. Our investigation shows that lesion detection is limited by relative noise in slab geometry and by absolute noise in cup geometry.
2015-06-01
[List-of-tables/figures residue; recoverable items: Kodak 9500 Cone Beam 3D System exposure settings (Table 4); average and statistical analysis results (Table 5); image of mounted PVC skull model on the Kodak 9500 (Figure 6); screen image of reconstructed CBCT digital … (Figure 7). Abstract fragments: "…replica was taken with the Kodak 9500 Cone Beam 3D System. To create the digital dental models fifteen type IV maxillary dental casts were made on the …"]
NASA Astrophysics Data System (ADS)
Yin, Yanshu; Feng, Wenjie
2017-12-01
In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.
NASA Astrophysics Data System (ADS)
Hapca, Simona
2015-04-01
Many soil properties and functions emerge from interactions of physical, chemical and biological processes at microscopic scales, which can be understood only by integrating techniques that traditionally are developed within separate disciplines. While recent advances in imaging techniques, such as X-ray computed tomography (X-ray CT), offer the possibility to reconstruct the 3D physical structure at fine resolutions, for the distribution of chemicals in soil, existing methods, based on scanning electron microscope (SEM) and energy dispersive X-ray detection (EDX), allow for characterization of the chemical composition only on 2D surfaces. At present, direct 3D measurement techniques are still lacking, sequential sectioning of soils, followed by 2D mapping of chemical elements and interpolation to 3D, being an alternative which is explored in this study. Specifically, we develop an integrated experimental and theoretical framework which combines the 3D X-ray CT imaging technique with 2D SEM-EDX and uses spatial statistics methods to map the chemical composition of soil in 3D. The procedure involves three stages: (1) scanning a resin-impregnated soil cube by X-ray CT, followed by precision cutting to produce parallel thin slices, the surfaces of which are scanned by SEM-EDX; (2) alignment of the 2D chemical maps within the internal 3D structure of the soil cube; and (3) development of spatial statistics methods to predict the chemical composition of 3D soil based on the observed 2D chemical and 3D physical data. Specifically, three statistical models consisting of a regression tree, a regression tree kriging and a cokriging model were used to predict the 3D spatial distribution of carbon, silicon, iron and oxygen in soil, these chemical elements showing a good spatial agreement between the X-ray grayscale intensities and the corresponding 2D SEM-EDX data. Due to the spatial correlation between the physical and chemical data, the regression-tree model showed a great potential in predicting chemical composition, in particular for iron, which is generally sparsely distributed in soil. For carbon, silicon and oxygen, which are more densely distributed, the additional kriging of the regression tree residuals significantly improved the prediction, whereas prediction based on co-kriging was less consistent across replicates, underperforming regression-tree kriging. The present study shows a great potential in integrating geo-statistical methods with imaging techniques to unveil the 3D chemical structure of soil at very fine scales, the framework being suitable to be further applied to other types of imaging data such as images of biological thin sections for characterization of microbial distribution. Key words: X-ray CT, SEM-EDX, segmentation techniques, spatial correlation, 3D soil images, 2D chemical maps.
Statistical shape modeling of human cochlea: alignment and principal component analysis
NASA Astrophysics Data System (ADS)
Poznyakovskiy, Anton A.; Zahnert, Thomas; Fischer, Björn; Lasurashvili, Nikoloz; Kalaidzidis, Yannis; Mürbe, Dirk
2013-02-01
The modeling of the cochlear labyrinth in living subjects is hampered by insufficient resolution of available clinical imaging methods. These methods usually provide resolutions coarser than 125 μm. This is too crude to record the position of the basilar membrane and, as a result, to distinguish even the scala tympani from the other scalae. This problem could be avoided by means of atlas-based segmentation. Specimens can endure higher radiation loads and, consequently, provide better-resolved images. The resulting surface can be used as the seed for atlas-based segmentation. To serve this purpose, we have developed a statistical shape model (SSM) of the human scala tympani based on segmentations obtained from 10 μCT image stacks. After segmentation, we aligned the resulting surfaces using Procrustes alignment. This algorithm was slightly modified to accommodate single models with nodes which do not necessarily correspond to salient features and vary in number between models. We have established correspondence by mutual proximity between nodes. Rather than using the standard Euclidean norm, we have applied an alternative logarithmic norm to improve outlier treatment. The minimization was done using the BFGS method. We have also split the surface nodes along an octree to reduce computation cost. Subsequently, we have performed the principal component analysis of the training set with the Jacobi eigenvalue algorithm. We expect the resulting method to provide not only a better understanding of interindividual variations in cochlear anatomy, but also a step towards individual models for pre-operative diagnostics prior to cochlear implant insertions.
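A compressed sketch of the alignment-plus-PCA pipeline, assuming the training surfaces have already been brought into point-to-point correspondence; the proximity-based matching, logarithmic norm, BFGS minimization, and octree splitting described above are not reproduced.

```python
import numpy as np
from scipy.spatial import procrustes

def build_ssm(shapes):
    # shapes: list of (n_points, 3) arrays with corresponding node ordering
    ref, aligned = shapes[0], []
    for s in shapes:
        _, s_aligned, _ = procrustes(ref, s)      # similarity alignment to the reference
        aligned.append(s_aligned.ravel())
    X = np.stack(aligned)                         # (n_shapes, 3 * n_points)
    mean = X.mean(axis=0)
    # PCA of the aligned training shapes via SVD of the centred data matrix
    _, svals, modes = np.linalg.svd(X - mean, full_matrices=False)
    variances = svals ** 2 / (len(shapes) - 1)    # variance captured by each mode
    return mean, modes, variances                 # new shapes ~ mean + modes.T @ coefficients
```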
Model-based image analysis of a tethered Brownian fibre for shear stress sensing
2017-01-01
The measurement of fluid dynamic shear stress acting on a biologically relevant surface is a challenging problem, particularly in the complex environment of, for example, the vasculature. While an experimental method for the direct detection of wall shear stress via the imaging of a synthetic biology nanorod has recently been developed, the data interpretation so far has been limited to phenomenological random walk modelling, small-angle approximation, and image analysis techniques which do not take into account the production of an image from a three-dimensional subject. In this report, we develop a mathematical and statistical framework to estimate shear stress from rapid imaging sequences based firstly on stochastic modelling of the dynamics of a tethered Brownian fibre in shear flow, and secondly on a novel model-based image analysis, which reconstructs fibre positions by solving the inverse problem of image formation. This framework is tested on experimental data, providing the first mechanistically rational analysis of the novel assay. What follows further develops the established theory for an untethered particle in a semi-dilute suspension, which is of relevance to, for example, the study of Brownian nanowires without flow, and presents new ideas in the field of multi-disciplinary image analysis. PMID:29212755
Park, Hae-Jeong; Kwon, Jun Soo; Youn, Tak; Pae, Ji Soo; Kim, Jae-Jin; Kim, Myung-Sun; Ha, Kyoo-Seob
2002-11-01
We describe a method for the statistical parametric mapping of low resolution electromagnetic tomography (LORETA) using high-density electroencephalography (EEG) and individual magnetic resonance images (MRI) to investigate the characteristics of the mismatch negativity (MMN) generators in schizophrenia. LORETA, using a realistic head model of the boundary element method derived from the individual anatomy, estimated the current density maps from the scalp topography of the 128-channel EEG. From the current density maps that covered the whole cortical gray matter (up to 20,000 points), volumetric current density images were reconstructed. Intensity normalization of the smoothed current density images was used to reduce the confounding effect of subject-specific global activity. After transforming each image into a standard stereotaxic space, we carried out statistical parametric mapping of the normalized current density images. We applied this method to the source localization of MMN in schizophrenia. The MMN generators, produced by a deviant tone of 1,200 Hz (5% of 1,600 trials) under the standard tone of 1,000 Hz, 80 dB binaural stimuli with 300 msec of inter-stimulus interval, were measured in 14 right-handed schizophrenic subjects and 14 age-, gender-, and handedness-matched controls. We found that the schizophrenic group exhibited significant current density reductions of MMN in the left superior temporal gyrus and the left inferior parietal gyrus (P < 0.0005). This study is the first voxel-by-voxel statistical mapping of current density using individual MRI and high-density EEG. Copyright 2002 Wiley-Liss, Inc.
New auto-segment method of cerebral hemorrhage
NASA Astrophysics Data System (ADS)
Wang, Weijiang; Shen, Tingzhi; Dang, Hua
2007-12-01
A novel method for automatic segmentation of computed tomography (CT) cerebral hemorrhage (CH) images is presented in this paper, which uses an expert system that models human knowledge about the CH automatic segmentation problem. The algorithm adopts a series of dedicated steps and extracts easily overlooked CH features identified from statistics over a large collection of real CH images, such as region area, region CT number, region smoothness, and statistical relationships between CH regions. A seven-step extraction mechanism ensures that these CH features are obtained correctly and efficiently. Using these CH features, a decision tree that models human knowledge about the CH automatic segmentation problem is built, ensuring the soundness and accuracy of the algorithm. Finally, experiments were conducted to verify the correctness of the automatic segmentation; the high accuracy and fast speed make it suitable for wide practical application.
Incorporating spatial context into statistical classification of multidimensional image data
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Tilton, J. C.; Swain, P. H.
1981-01-01
Compound decision theory is employed to develop a general statistical model for classifying image data using spatial context. The classification algorithm developed from this model exploits the tendency of certain ground-cover classes to occur more frequently in some spatial contexts than in others. A key input to this contextual classifier is a quantitative characterization of this tendency: the context function. Several methods for estimating the context function are explored, and two complementary methods are recommended. The contextual classifier is shown to produce substantial improvements in classification accuracy compared to the accuracy produced by a non-contextual uniform-priors maximum likelihood classifier when these methods of estimating the context function are used. An approximate algorithm, which cuts computational requirements by over one-half, is presented. The search for an optimal implementation is furthered by an exploration of the relative merits of using spectral classes or information classes for classification and/or context function estimation.
A Method for Retrieving Ground Flash Fraction from Satellite Lightning Imager Data
NASA Technical Reports Server (NTRS)
Koshak, William J.
2009-01-01
A general theory for retrieving the fraction of ground flashes in a set of N lightning flashes observed by a satellite-based lightning imager is provided. An "exponential model" is applied as a physically reasonable constraint to describe the measured optical parameter distributions, and population statistics (i.e., mean, variance) are invoked to add additional constraints to the retrieval process. The retrieval itself is expressed in terms of a Bayesian inference, and the Maximum A Posteriori (MAP) solution is obtained. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The ability to retrieve ground flash fraction has important benefits for the atmospheric chemistry community. For example, using the method to partition the existing satellite global lightning climatology into separate ground and cloud flash climatologies will improve estimates of lightning nitrogen oxides (NOx) production; this in turn will improve both regional air quality and global chemistry/climate model predictions.
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
NASA Astrophysics Data System (ADS)
Yothers, Mitchell P.; Browder, Aaron E.; Bumm, Lloyd A.
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
Stochastic performance modeling and evaluation of obstacle detectability with imaging range sensors
NASA Technical Reports Server (NTRS)
Matthies, Larry; Grandjean, Pierrick
1993-01-01
Statistical modeling and evaluation of the performance of obstacle detection systems for Unmanned Ground Vehicles (UGVs) is essential for the design, evaluation, and comparison of sensor systems. In this report, we address this issue for imaging range sensors by dividing the evaluation problem into two levels: quality of the range data itself and quality of the obstacle detection algorithms applied to the range data. We review existing models of the quality of range data from stereo vision and AM-CW LADAR, then use these to derive a new model for the quality of a simple obstacle detection algorithm. This model predicts the probability of detecting obstacles and the probability of false alarms, as a function of the size and distance of the obstacle, the resolution of the sensor, and the level of noise in the range data. We evaluate these models experimentally using range data from stereo image pairs of a gravel road with known obstacles at several distances. The results show that the approach is a promising tool for predicting and evaluating the performance of obstacle detection with imaging range sensors.
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.
Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E
2014-02-21
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUV(peak)) over lesions of interest. Relative differences in SUV(peak) between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUV(peak) values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
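A sketch of the multiple-regression step with leave-one-subject-out cross-validation in scikit-learn; the feature matrix, response, and per-patient group labels below are random placeholders rather than the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 6))        # 6 selected respiratory-pattern features per trace
y = rng.normal(size=22)             # relative change in SUVpeak (gated vs. static)
groups = np.arange(22)              # patient identifiers (one trace per patient in this toy setup)

# Each fold holds out one subject's data and predicts it from a model trained on the rest
pred = cross_val_predict(LinearRegression(), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("LOSO-CV predictions:", np.round(pred, 2))
```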
Neural Network for Nanoscience Scanning Electron Microscope Image Recognition.
Modarres, Mohammad Hadi; Aversa, Rossella; Cozzini, Stefano; Ciancio, Regina; Leto, Angelo; Brandino, Giuseppe Piero
2017-10-16
In this paper we applied transfer learning techniques for image recognition, automatic categorization, and labeling of nanoscience images obtained by scanning electron microscope (SEM). Roughly 20,000 SEM images were manually classified into 10 categories to form a labeled training set, which can be used as a reference set for future applications of deep learning enhanced algorithms in the nanoscience domain. The categories chosen spanned the range of 0-Dimensional (0D) objects such as particles, 1D nanowires and fibres, 2D films and coated surfaces, and 3D patterned surfaces such as pillars. The training set was used to retrain several convolutional neural network models (Inception-v3, Inception-v4, ResNet) on the SEM dataset and to compare them. We obtained comparable results by performing feature extraction with the different models on the same dataset. We performed additional analysis of the classifier on a second test set to further investigate the results both on particular cases and from a statistical point of view. Our algorithm was able to successfully classify around 90% of a test dataset consisting of SEM images, while reduced accuracy was found in the case of images at the boundary between two categories or containing elements of multiple categories. In these cases, the image classification did not identify a predominant category with a high score. We used the statistical outcomes from testing to deploy a semi-automatic workflow able to classify and label images generated by the SEM. Finally, a separate training was performed to determine the volume fraction of coherently aligned nanowires in SEM images. The results were compared with those obtained using the Local Gradient Orientation method. This example demonstrates the versatility and the potential of transfer learning to address specific tasks of interest in nanoscience applications.
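A sketch of the feature-extraction flavour of transfer learning with torchvision; a ResNet backbone is used purely for illustration, and the weights enum follows recent torchvision releases, so this is an assumed setup rather than the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze a pretrained backbone and retrain only the classification head on the 10 SEM categories
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)   # new head for the 10 categories

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# a training loop over labelled SEM images would update only model.fc
```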
Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification
NASA Astrophysics Data System (ADS)
Wang, Shouyi; Bowen, Stephen R.; Chaovalitwongse, W. Art; Sandison, George A.; Grabowski, Thomas J.; Kinahan, Paul E.
2014-02-01
The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUVpeak) over lesions of interest. Relative differences in SUVpeak between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUVpeak values. The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
NASA Astrophysics Data System (ADS)
León, Madeleine; Escalante-Ramirez, Boris
2013-11-01
Knee osteoarthritis (OA) is characterized by the morphological degeneration of cartilage. Efficient segmentation of cartilage is important for cartilage damage diagnosis and to support therapeutic responses. We present a method for knee cartilage segmentation in magnetic resonance images (MRI). Our method incorporates the Hermite Transform to obtain a hierarchical decomposition of contours which describe knee cartilage shapes. Then, we compute a statistical model of the contour of interest from a set of training images. Thereby, our Hierarchical Active Shape Model (HASM) captures a large range of shape variability even from a small group of training samples, improving segmentation accuracy. The method was trained with a set of 16 knee MR images and tested with the leave-one-out method.
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
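A toy version of the idea, not the report's estimator: scan the image row by row and run a scalar Kalman filter along each line, assuming a first-order autoregressive signal model with known coefficients and noise variance.

```python
# Line-by-line scalar Kalman filtering of a noisy image (sketch; parameters assumed known).
import numpy as np

def kalman_denoise_rows(img, a=0.95, q=0.01, r=0.05):
    """img: 2-D array in [0, 1]; a: AR(1) coefficient; q: process var; r: noise var."""
    out = np.empty_like(img, dtype=float)
    for i, row in enumerate(img):
        x, p = row[0], r                               # initialize with first pixel
        for j, z in enumerate(row):
            x_pred, p_pred = a * x, a * a * p + q      # predict
            k = p_pred / (p_pred + r)                  # Kalman gain
            x = x_pred + k * (z - x_pred)              # update with measurement
            p = (1.0 - k) * p_pred
            out[i, j] = x
    return out
```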
Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging
NASA Astrophysics Data System (ADS)
Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.
2008-03-01
We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient between the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6%, versus +43% for an approach using an uncorrected ratio image when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
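The Monte Carlo check of the ratio distribution can be sketched with a simplified model in which the numerator and denominator are correlated Gaussians (the paper's full model uses a Rician/Gaussian mixture); the means, coefficient of variation, and correlation below are illustrative only.

```python
# Monte Carlo sketch of a ratio distribution with correlated numerator/denominator.
import numpy as np

def simulate_ratio(mu_num, mu_den, cv_den, rho, n=100000, seed=0):
    rng = np.random.default_rng(seed)
    sd_den = cv_den * mu_den
    sd_num = cv_den * mu_num                  # assume similar relative noise (simplification)
    cov = [[sd_num**2, rho * sd_num * sd_den],
           [rho * sd_num * sd_den, sd_den**2]]
    num, den = rng.multivariate_normal([mu_num, mu_den], cov, size=n).T
    return num / den

# Example: a 40% partial-volume fat voxel with c.v. = 0.05 and rho = 0.6.
r = simulate_ratio(mu_num=0.4, mu_den=1.0, cv_den=0.05, rho=0.6)
print(f"mean ratio = {r.mean():.4f}, std = {r.std():.4f}")
```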
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
A statistical method for lung tumor segmentation uncertainty in PET images based on user inference.
Zheng, Chaojie; Wang, Xiuying; Feng, Dagan
2015-01-01
PET has been widely accepted as an effective imaging modality for lung tumor diagnosis and treatment. However, standard criteria for delineating tumor boundaries from PET have yet to be developed, largely because of the relatively low quality of PET images, uncertain tumor boundary definition, and the variety of tumor characteristics. In this paper, we propose a statistical solution to segmentation uncertainty on the basis of user inference. We first define the uncertainty segmentation band on the basis of a segmentation probability map constructed from the Random Walks (RW) algorithm; then, based on the extracted features of the user inference, we use Principal Component Analysis (PCA) to formulate the statistical model for labeling the uncertainty band. We validated our method on 10 lung PET-CT phantom studies from the public RIDER collections [1] and 16 clinical PET studies in which tumors were manually delineated by two experienced radiologists. The methods were validated using the Dice similarity coefficient (DSC) to measure spatial volume overlap. Our method achieved an average DSC of 0.878 ± 0.078 on the phantom studies and 0.835 ± 0.039 on the clinical studies.
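The Dice similarity coefficient used for validation here is straightforward to compute; a minimal sketch for boolean masks of the same shape:

```python
# Dice similarity coefficient (DSC) between a segmentation and a reference mask.
import numpy as np

def dice(seg, ref):
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    intersection = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Example: dice(predicted_mask, expert_mask) -> value in [0, 1], 1 = perfect overlap.
```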
Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N; Styner, Martin A
2013-09-01
Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semiautomated system to quantify MRI biomarkers of GRMD. Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using a semiautomated full muscle segmentation method. We then performed preprocessing, including intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted and T2-weighted fat-suppressed images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume, intensity statistics over MRI biomarker maps, and statistical image texture features. The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation showed significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. The experimental results demonstrated that this quantification tool could reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients.
Structure-Specific Statistical Mapping of White Matter Tracts
Yushkevich, Paul A.; Zhang, Hui; Simon, Tony; Gee, James C.
2008-01-01
We present a new model-based framework for the statistical analysis of diffusion imaging data associated with specific white matter tracts. The framework takes advantage of the fact that several of the major white matter tracts are thin sheet-like structures that can be effectively modeled by medial representations. The approach involves segmenting major tracts and fitting them with deformable geometric medial models. The medial representation makes it possible to average and combine tensor-based features along directions locally perpendicular to the tracts, thus reducing data dimensionality and accounting for errors in normalization. The framework enables the analysis of individual white matter structures, and provides a range of possibilities for computing statistics and visualizing differences between cohorts. The framework is demonstrated in a study of white matter differences in pediatric chromosome 22q11.2 deletion syndrome. PMID:18407524
Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye
2017-10-01
Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, combined with the dynamic directional convolution vector field. Next, an active contour model is utilized to propagate the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chen, Qing; Xu, Pengfei; Liu, Wenzhong
2016-01-01
Computer vision as a fast, low-cost, noncontact, and online monitoring technology has been an important tool to inspect product quality, particularly on a large-scale assembly production line. However, the current industrial vision system is far from satisfactory in the intelligent perception of complex grain images, comprising a large number of local homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem based on the statistical modeling of spatial structures of grain images. We present a physical explanation in advance to indicate that the spatial structures of the complex grain images are subject to a representative Weibull distribution according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on sparse multikernel–least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to classify (or identify) automatically the production quality of rice. The experiments on the real application case, compared with the commonly used methods, illustrate the validity of our method. PMID:26986726
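The statistical-modeling step described above, fitting a Weibull distribution to multiscale gradient responses of a grain image, can be sketched as follows; the filter scales and the two-parameter Weibull fit are illustrative assumptions, not the paper's exact pipeline.

```python
# Fit a Weibull distribution to multiscale Gaussian-derivative responses (sketch).
import numpy as np
from scipy import ndimage, stats

def weibull_shape_of_structure(img, sigmas=(1.0, 2.0, 4.0)):
    """img: 2-D grayscale array; returns Weibull shape and scale of gradient magnitudes."""
    responses = []
    for s in sigmas:
        gx = ndimage.gaussian_filter(img, s, order=(0, 1))   # d/dx at scale s
        gy = ndimage.gaussian_filter(img, s, order=(1, 0))   # d/dy at scale s
        responses.append(np.hypot(gx, gy).ravel())
    mag = np.concatenate(responses)
    mag = mag[mag > 0]
    shape, loc, scale = stats.weibull_min.fit(mag, floc=0)   # 2-parameter Weibull
    return shape, scale
```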
Precht, Helle; Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess
2016-12-01
Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution and increased low-contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. The aim was to evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. VGA showed significant improvements in sharpness when comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. ASIR improved the subjective image quality parameter of sharpness and, objectively, reduced noise and increased CNR.
NASA Astrophysics Data System (ADS)
Scanlan, Neil W.; Schott, John R.; Brown, Scott D.
2004-01-01
Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can be eventually implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
Correction of defective pixels for medical and space imagers based on Ising Theory
NASA Astrophysics Data System (ADS)
Cohen, Eliahu; Shnitser, Moriel; Avraham, Tsvika; Hadar, Ofer
2014-09-01
We propose novel models for image restoration based on statistical physics. We investigate the affinity between these fields and describe a framework from which interesting denoising algorithms can be derived: Ising-like models and simulated annealing techniques. When combined with known predictors such as Median and LOCO-I, these models become even more effective. In order to further examine the proposed models, we apply them to two important problems: (i) digital cameras in space damaged by cosmic radiation, and (ii) ultrasonic medical devices degraded by speckle noise. The results, as well as benchmarks and comparisons, suggest in most cases a significant gain in PSNR and SSIM relative to other filters.
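A toy Ising-style restoration of a binary image with simulated annealing conveys the flavor of the models discussed above; the coupling strength, data-fidelity weight, and annealing schedule are illustrative choices, not the authors' parameters.

```python
# Toy Ising-style binary image restoration via simulated annealing (Metropolis sweeps).
import numpy as np

def ising_denoise(noisy, beta=2.0, h=1.5, n_sweeps=30, t0=2.0, seed=0):
    """noisy: 2-D array with values in {-1, +1}; returns a restored copy."""
    rng = np.random.default_rng(seed)
    x = noisy.copy()
    rows, cols = x.shape
    for sweep in range(n_sweeps):
        T = t0 * (0.9 ** sweep)                       # annealing schedule
        for _ in range(x.size):
            i, j = rng.integers(rows), rng.integers(cols)
            nbrs = x[(i - 1) % rows, j] + x[(i + 1) % rows, j] + \
                   x[i, (j - 1) % cols] + x[i, (j + 1) % cols]
            # Energy change if pixel (i, j) is flipped: smoothness term (beta)
            # plus a data-fidelity term (h) tying the estimate to the observation.
            dE = 2 * x[i, j] * (beta * nbrs + h * noisy[i, j])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i, j] = -x[i, j]
    return x
```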
Statistical image reconstruction from correlated data with applications to PET
Alessio, Adam; Sauer, Ken; Kinahan, Paul
2008-01-01
Most statistical reconstruction methods for emission tomography are designed for data modeled as conditionally independent Poisson variates. In reality, due to scanner detectors, electronics and data processing, correlations are introduced into the data resulting in dependent variates. In general, these correlations are ignored because they are difficult to measure and lead to computationally challenging statistical reconstruction algorithms. This work addresses the second concern, seeking to simplify the reconstruction of correlated data and provide a more precise image estimate than the conventional independent methods. In general, correlated variates have a large non-diagonal covariance matrix that is computationally challenging to use as a weighting term in a reconstruction algorithm. This work proposes two methods to simplify the use of a non-diagonal covariance matrix as the weighting term by (a) limiting the number of dimensions in which the correlations are modeled and (b) adopting flexible, yet computationally tractable, models for correlation structure. We apply and test these methods with simple simulated PET data and data processed with the Fourier rebinning algorithm which include the one-dimensional correlations in the axial direction and the two-dimensional correlations in the transaxial directions. The methods are incorporated into a penalized weighted least-squares 2D reconstruction and compared with a conventional maximum a posteriori approach. PMID:17921576
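A small dense sketch of penalized weighted least squares with a non-diagonal data covariance conveys the kind of weighting discussed above; real sinogram-sized problems would need a sparse system model and iterative solvers, and the penalty matrix R is an assumption.

```python
# Penalized weighted least-squares reconstruction with a full covariance weighting (toy sizes).
import numpy as np

def pwls_reconstruct(A, y, cov, R, beta):
    """Solve argmin_x (y - A x)^T W (y - A x) + beta * x^T R x, with W = cov^{-1}."""
    W = np.linalg.inv(cov)                       # full (correlated) weighting matrix
    H = A.T @ W @ A + beta * R                   # normal equations plus roughness penalty
    return np.linalg.solve(H, A.T @ W @ y)

# Usage: x_hat = pwls_reconstruct(A, y, cov, np.eye(A.shape[1]), beta=0.1)
```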
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
Mammographic texture synthesis using genetic programming and clustered lumpy background
NASA Astrophysics Data System (ADS)
Castella, Cyril; Kinkel, Karen; Descombes, François; Eckstein, Miguel P.; Sottas, Pierre-Edouard; Verdun, Francis R.; Bochud, François O.
2006-03-01
In this work we investigated the digital synthesis of images which mimic real textures observed in mammograms. Such images could be produced in an unlimited number with tunable statistical properties in order to study human performance and model observer performance in perception experiments. We used the previously developed clustered lumpy background (CLB) technique and optimized its parameters with a genetic algorithm (GA). In order to maximize the realism of the textures, we combined the GA objective approach with psychophysical experiments involving the judgments of radiologists. Thirty-six statistical features were computed and averaged over 1000 regions of interest from real mammograms. The same features were measured for the synthetic textures, and the Mahalanobis distance was used to quantify the similarity of the features between the real and synthetic textures. The similarity, as measured by the Mahalanobis distance, was used as the GA fitness function for evolving the free CLB parameters. In the psychophysical approach, experienced radiologists were asked to judge the realism of synthetic images by considering typical structures that are expected to be found on real mammograms: glandular and fatty areas, and fiber crossings. Results show that CLB images found via optimization with GA are significantly closer to real mammograms than previously published images. Moreover, the psychophysical experiments confirm that all the above-mentioned structures are reproduced well on the generated images. This means that we can generate an arbitrarily large database of textures mimicking mammograms with traceable statistical properties.
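The Mahalanobis-distance fitness driving the genetic algorithm can be sketched as below; the feature vectors are assumed to be precomputed, and the negative sign simply turns the distance into a quantity to maximize.

```python
# Mahalanobis-distance fitness for a GA (sketch): distance of a synthetic texture's
# feature vector from the distribution of features measured on real mammogram ROIs.
import numpy as np

def mahalanobis_fitness(features_synthetic, features_real):
    """features_real: (n_rois, n_features); features_synthetic: (n_features,)."""
    mu = features_real.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(features_real, rowvar=False))
    d = features_synthetic - mu
    distance = np.sqrt(d @ cov_inv @ d)
    return -distance          # GA maximizes fitness, so smaller distance is better
```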
NASA Astrophysics Data System (ADS)
Irvine, John M.; Ghadar, Nastaran; Duncan, Steve; Floyd, David; O'Dowd, David; Lin, Kristie; Chang, Tom
2017-03-01
Quantitative biomarkers for assessing the presence, severity, and progression of age-related macular degeneration (AMD) would benefit research, diagnosis, and treatment. This paper explores development of quantitative biomarkers derived from OCT imagery of the retina. OCT images for approximately 75 patients with Wet AMD, Dry AMD, and no AMD (healthy eyes) were analyzed to identify image features indicative of the patients' conditions. OCT image features provide a statistical characterization of the retina. Healthy eyes exhibit a layered structure, whereas chaotic patterns indicate the deterioration associated with AMD. Our approach uses wavelet and Frangi filtering, combined with statistical features that do not rely on image segmentation, to assess patient conditions. Classification analysis indicates clear separability of Wet AMD from other conditions, including Dry AMD and healthy retinas. The probability of correct classification was 95.7%, as determined from cross validation. Similar classification analysis predicts the response of Wet AMD patients to treatment, as measured by the Best Corrected Visual Acuity (BCVA). A statistical model predicts BCVA from the imagery features with R2 = 0.846. Initial analysis of OCT imagery indicates that imagery-derived features can provide useful biomarkers for the characterization and quantification of AMD: accurate assessment of Wet AMD relative to other conditions, image-based prediction of outcome for Wet AMD treatment, and accurate prediction of BCVA. Unlike many methods in the literature, our techniques do not rely on segmentation of the OCT image. Next steps include larger scale testing and validation.
Flow and transport in digitized images of Berea sandstone: ergodicity, stationarity and upscaling
NASA Astrophysics Data System (ADS)
Puyguiraud, A.; Dentz, M.; Gouze, P.
2017-12-01
We perform Stokes flow simulations on digitized images of a Berea sandstone sample obtained through micro-tomography imaging and segmentation processes. We obtain accurate information on the transport using a streamline reconstruction algorithm which uses the velocity field obtained from the flow simulation as input data. This technique is based on the method proposed by Pollock (Groundwater, 1988) but employs a quadratic interpolation near the rock mesh cells of the domain, similarly to Mostaghimi et al. (SPE, 2012). This allows an accurate resolution of the velocity field near the solid interface, which plays an important role in the transport characteristics, such as the probability density of first arrival times and the growth of the mean squared displacement, among others, which exhibit non-Fickian behavior. We analyze Lagrangian and Eulerian velocity statistics and their relation, and then focus on the ergodicity and the stationarity properties of the transport. We analyze the temporal evolution of Lagrangian velocity statistics for different injection conditions, and find quick convergence to a limiting velocity distribution, indicating the transport to be near-stationary. The equivalence between velocity samplings within and across streamlines, as well as the independence of the statistics from the number of sampled streamlines, lead us to conclude that the transport may be modeled as ergodic. These characteristics then allow us to upscale the 3-dimensional simulations using a 1-dimensional Continuous Time Random Walk model. This model, parametrized by the velocity results and the characteristic correlation length obtained from the above-mentioned simulations, is able to efficiently reproduce the results and to predict larger-scale behaviors.
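A minimal 1-D continuous-time random walk of the kind used for the upscaling can be sketched as follows; the lognormal velocity distribution, correlation length, and domain length are placeholders standing in for the quantities extracted from the pore-scale simulations.

```python
# 1-D continuous-time random walk (sketch): particles take steps of length ell with
# velocities resampled from a Lagrangian velocity distribution after each step.
import numpy as np

def ctrw_arrival_times(n_particles, domain_length, ell, velocity_sampler, seed=0):
    rng = np.random.default_rng(seed)
    arrivals = np.empty(n_particles)
    for p in range(n_particles):
        x = t = 0.0
        v = velocity_sampler(rng)
        while x < domain_length:
            step = min(ell, domain_length - x)     # velocity decorrelates every ell
            t += step / v
            x += step
            v = velocity_sampler(rng)
        arrivals[p] = t
    return arrivals

times = ctrw_arrival_times(
    n_particles=5000, domain_length=1.0, ell=0.01,
    velocity_sampler=lambda rng: rng.lognormal(mean=-1.0, sigma=1.0))
print(f"mean first-arrival time: {times.mean():.3f}")
```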
Statistical shape modeling based renal volume measurement using tracked ultrasound
NASA Astrophysics Data System (ADS)
Pai Raikar, Vipul; Kwartowitz, David M.
2017-03-01
Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task.1 Identifying patients with progressive disease is vital to providing the new treatments being developed and to enabling such patients to enter clinical trials for new therapy. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP)2 has shown that TKV is an early and accurate measure of cystic burden and likely growth rate. It is strongly associated with loss of renal function.3 While ultrasound (US) has proven to be an excellent tool for diagnosing the disease, monitoring short-term changes using ultrasound has been shown to be inaccurate. This is attributed to high operator variability and poor reproducibility as compared to tomographic modalities such as CT and MR (the gold standard). Ultrasound has emerged as a standout modality for intra-procedural imaging, and methods for spatial localization have afforded us the ability to track 2D ultrasound in the physical space in which it is being used. In addition, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. In this work, we aim at improving the prognostic value of US in managing ADPKD by assessing the accuracy of using statistical shape model augmented US data to predict TKV, with the end goal of monitoring short-term changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, R; Aguilera, T; Shultz, D
2014-06-15
Purpose: This study aims to develop predictive models of patient outcome by extracting advanced imaging features (i.e., Radiomics) from FDG-PET images. Methods: We acquired pre-treatment PET scans for 51 stage I NSCLC patients treated with SABR. We calculated 139 quantitative features from each patient PET image, including 5 morphological features, 8 statistical features, 27 texture features, and 100 features from the intensity-volume histogram. Based on the imaging features, we aim to distinguish between 2 risk groups of patients: those with regional failure or distant metastasis versus those without. We investigated 3 pattern classification algorithms: linear discriminant analysis (LDA), naive Bayes (NB), and logistic regression (LR). To avoid the curse of dimensionality, we performed feature selection by first removing redundant features and then applying sequential forward selection using the wrapper approach. To evaluate the predictive performance, we performed 10-fold cross validation with 1000 random splits of the data and calculated the area under the ROC curve (AUC). Results: Feature selection identified 2 texture features (homogeneity and/or wavelet decompositions) for NB and LR, while for LDA, SUVmax and one texture feature (correlation) were identified. All 3 classifiers achieved statistically significant improvements over conventional PET imaging metrics such as tumor volume (AUC = 0.668) and SUVmax (AUC = 0.737). Overall, NB achieved the best predictive performance (AUC = 0.806). This also compares favorably with MTV using the best threshold at an SUV of 11.6 (AUC = 0.746). At a sensitivity of 80%, NB achieved 69% specificity, while SUVmax and tumor volume only had 36% and 47% specificity. Conclusion: Through a systematic analysis of advanced PET imaging features, we are able to build models with improved predictive value over conventional imaging metrics. If validated in a large independent cohort, the proposed techniques could potentially aid in identifying patients who might benefit from adjuvant therapy.
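A rough sketch of this kind of evaluation pipeline, forward feature selection wrapped around a classifier with cross-validated AUC, is shown below; the feature matrix, labels, and the number of selected features are synthetic placeholders, and logistic regression stands in for the several classifiers compared in the study.

```python
# Wrapper-style forward feature selection plus cross-validated AUC (sketch).
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(51, 139))                     # 51 patients x 139 PET features (synthetic)
y = rng.integers(0, 2, size=51)                    # 1 = regional/distant failure (synthetic)

clf = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                              n_features_to_select=2, direction="forward"),
    LogisticRegression(max_iter=1000))

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC = {auc.mean():.3f}")
```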
Imaging regional renal function parameters using radionuclide tracers
NASA Astrophysics Data System (ADS)
Qiao, Yi
A compartmental model is given for evaluating kidney function accurately and noninvasively. This model is cast into a parallel multi-compartment structure, and each pixel region (picture element) of the kidneys is considered a single kidney compartment. The loss of radionuclide tracers from the blood to the kidney and from the kidney to the bladder is modelled in great detail. Both the uptake function and the excretion function of the kidneys can be evaluated pixel by pixel, and regional diagnostic information on renal function is obtained. Gamma camera image data are required by this model, and a screening-test-based renal function measurement is provided. The regional blood background is subtracted from the kidney region of interest (ROI), and the kidney regional rate constants are estimated analytically using the Kuhn-Tucker multiplier method in convex programming by considering the input/output behavior of the kidney compartments. The detailed physiological model of the peripheral compartments of the system, which is not available for most radionuclide tracers, is not required in the determination of the kidney regional rate constants and the regional blood background factors within the kidney ROI. Moreover, the statistical significance of measurements is considered to assure the improved statistical properties of the estimated kidney rate constants. The relations between various renal function parameters and the kidney rate constants are established. Multiple renal function measurements can be found from the renal compartmental model. The blood radioactivity curve and the regional (or total) radiorenogram determining the regional (or total) summed behavior of the kidneys are obtained analytically with the consideration of the statistical significance of measurements using convex programming methods for a single peripheral compartment system. In addition, a new technique for the determination of 'initial conditions' in both the blood compartment and the kidney compartment is presented. The blood curve and the radiorenogram are analyzed in great detail and a physiological analysis from the radiorenogram is given. Applications of Kuhn-Tucker multiplier methods are illustrated for the renal compartmental model in the field of nuclear medicine. Conventional kinetic data analysis methods, the maximum likelihood method, and the weighted integration method are investigated and used for comparisons. Moreover, the effect of the blood background subtraction is shown by using the gamma camera images in man. Several functional images are calculated and the functional imaging technique is applied for evaluating renal function in man quantitatively and visually and compared with comments from a physician.
2013-01-01
Background: Metabolic alteration is one of the hallmarks of carcinogenesis. We aimed to identify certain metabolic biomarkers for the early detection of pancreatic cancer (PC) using the transgenic PTEN-null mouse model. Pancreas-specific deletion of PTEN in mouse caused progressive premalignant lesions such as highly proliferative ductal metaplasia. We imaged the mitochondrial redox state of the pancreases of the transgenic mice approximately eight months old using the redox scanner, i.e., the nicotinamide adenine dinucleotide/oxidized flavoproteins (NADH/Fp) fluorescence imager at low temperature. Two different approaches, the global averaging of the redox indices without considering tissue heterogeneity along tissue depth and the univariate analysis of multi-section data using tissue depth as a covariate, were adopted for the statistical analysis of the multi-section imaging data. The standard deviations of the redox indices and the histogram analysis with Gaussian fit were used to determine the tissue heterogeneity. Results: All methods show consistently that the PTEN deficient pancreases (Pdx1-Cre;PTENlox/lox) were significantly more heterogeneous in their mitochondrial redox state compared to the controls (PTENlox/lox). Statistical analysis taking into account the variations of the redox state with tissue depth further shows that PTEN deletion significantly shifted the pancreatic tissue to an overall more oxidized state. Oxidization of the PTEN-null group was not seen when the imaging data were analyzed by global averaging without considering the variation of the redox indices along tissue depth, indicating the importance of taking tissue heterogeneity into account for the statistical analysis of the multi-section imaging data. Conclusions: This study reveals a possible link between the mitochondrial redox state alteration of the pancreas and its malignant transformation and may be further developed for establishing potential metabolic biomarkers for the early diagnosis of pancreatic cancer. PMID:24252270
Statistical Model of Dynamic Markers of the Alzheimer's Pathological Cascade.
Balsis, Steve; Geraci, Lisa; Benge, Jared; Lowe, Deborah A; Choudhury, Tabina K; Tirso, Robert; Doody, Rachelle S
2018-05-05
Alzheimer's disease (AD) is a progressive disease reflected in markers across assessment modalities, including neuroimaging, cognitive testing, and evaluation of adaptive function. Identifying a single continuum of decline across assessment modalities in a single sample is statistically challenging because of the multivariate nature of the data. To address this challenge, we implemented advanced statistical analyses designed specifically to model complex data across a single continuum. We analyzed data from the Alzheimer's Disease Neuroimaging Initiative (ADNI; N = 1,056), focusing on indicators from the assessments of magnetic resonance imaging (MRI) volume, fluorodeoxyglucose positron emission tomography (FDG-PET) metabolic activity, cognitive performance, and adaptive function. Item response theory was used to identify the continuum of decline. Then, through a process of statistical scaling, indicators across all modalities were linked to that continuum and analyzed. Findings revealed that measures of MRI volume, FDG-PET metabolic activity, and adaptive function added measurement precision beyond that provided by cognitive measures, particularly in the relatively mild range of disease severity. More specifically, MRI volume, and FDG-PET metabolic activity become compromised in the very mild range of severity, followed by cognitive performance and finally adaptive function. Our statistically derived models of the AD pathological cascade are consistent with existing theoretical models.
Collaborative classification of hyperspectral and visible images with convolutional neural network
NASA Astrophysics Data System (ADS)
Zhang, Mengmeng; Li, Wei; Du, Qian
2017-10-01
Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well-known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, the convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate better classification performance offered by this framework.
Learning-based stochastic object models for use in optimizing imaging systems
NASA Astrophysics Data System (ADS)
Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua
2017-03-01
It is widely known that the optimization of imaging systems based on objective, or task-based, measures of image quality via computer simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in anatomy within a specified ensemble of patients remains a challenging task. Because they are established by use of image data corresponding to a single patient, previously reported numerical anatomical models lack the ability to accurately model inter-patient variations in anatomy. In certain applications, however, databases of high-quality volumetric images are available that can facilitate this task. In this work, a novel and tractable methodology for learning a SOM from a set of volumetric training images is developed. The proposed method is based upon geometric attribute distribution (GAD) models, which characterize the inter-structural centroid variations and the intra-structural shape variations of each individual anatomical structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations learned from training data. By use of the GAD models, random organ shapes and positions can be generated and integrated to form an anatomical phantom. The randomness in organ shape and position will reflect the variability of anatomy present in the training data. To demonstrate the methodology, a SOM corresponding to the pelvis of an adult male was computed and a corresponding ensemble of phantoms was created. Additionally, computer-simulated X-ray projection images corresponding to the phantoms were computed, from which tomographic images were reconstructed.
Ben Younes, Lassad; Nakajima, Yoshikazu; Saito, Toki
2014-03-01
Femur segmentation is well established and widely used in computer-assisted orthopedic surgery. However, most of the robust segmentation methods such as statistical shape models (SSM) require human intervention to provide an initial position for the SSM. In this paper, we propose to overcome this problem and provide a fully automatic femur segmentation method for CT images based on primitive shape recognition and SSM. Femur segmentation in CT scans was performed using primitive shape recognition based on a robust algorithm such as the Hough transform and RANdom SAmple Consensus. The proposed method is divided into 3 steps: (1) detection of the femoral head as sphere and the femoral shaft as cylinder in the SSM and the CT images, (2) rigid registration between primitives of SSM and CT image to initialize the SSM into the CT image, and (3) fitting of the SSM to the CT image edge using an affine transformation followed by a nonlinear fitting. The automated method provided good results even with a high number of outliers. The difference of segmentation error between the proposed automatic initialization method and a manual initialization method is less than 1 mm. The proposed method detects primitive shape position to initialize the SSM into the target image. Based on primitive shapes, this method overcomes the problem of inter-patient variability. Moreover, the results demonstrate that our method of primitive shape recognition can be used for 3D SSM initialization to achieve fully automatic segmentation of the femur.
Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle
NASA Astrophysics Data System (ADS)
Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon
2018-03-01
Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optics systems to provide a large multiplex of wavefront sensors positioned by Starbugs. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle and the impact of its structure on wavefront measurement accuracy statistics are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with low signal-to-noise ratio, the accuracy is instead limited by the read noise of the detector rather than by the wound fibre image bundle structure defects. We demonstrate this both in simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
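The first-moment (centre-of-mass) centroid used in this accuracy study is simple to state in code; the sketch below assumes a background-subtracted sub-aperture image.

```python
# First-moment centroid of a Shack-Hartmann spot (sketch).
import numpy as np

def first_moment_centroid(spot):
    """spot: 2-D background-subtracted intensity array; returns (x, y) centroid."""
    spot = np.clip(spot, 0, None)                 # exclude negative noise pixels
    total = spot.sum()
    if total == 0:
        return np.nan, np.nan
    ys, xs = np.indices(spot.shape)
    return (xs * spot).sum() / total, (ys * spot).sum() / total
```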
Lamare, F; Le Maitre, A; Dawood, M; Schäfers, K P; Fernandez, P; Rimoldi, O E; Visvikis, D
2014-07-01
Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms nor scatter corrections were included. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model based motion compensation.
Russell, Richard A; Adams, Niall M; Stephens, David A; Batty, Elizabeth; Jensen, Kirsten; Freemont, Paul S
2009-04-22
Considerable advances in microscopy, biophysics, and cell biology have provided a wealth of imaging data describing the functional organization of the cell nucleus. Until recently, cell nuclear architecture has largely been assessed by subjective visual inspection of fluorescently labeled components imaged by the optical microscope. This approach is inadequate to fully quantify spatial associations, especially when the patterns are indistinct, irregular, or highly punctate. Accurate image processing techniques as well as statistical and computational tools are thus necessary to interpret this data if meaningful spatial-function relationships are to be established. Here, we have developed a thresholding algorithm, stable count thresholding (SCT), to segment nuclear compartments in confocal laser scanning microscopy image stacks to facilitate objective and quantitative analysis of the three-dimensional organization of these objects using formal statistical methods. We validate the efficacy and performance of the SCT algorithm using real images of immunofluorescently stained nuclear compartments and fluorescent beads as well as simulated images. In all three cases, the SCT algorithm delivers a segmentation that is far better than standard thresholding methods, and more importantly, is comparable to manual thresholding results. By applying the SCT algorithm and statistical analysis, we quantify the spatial configuration of promyelocytic leukemia nuclear bodies with respect to irregular-shaped SC35 domains. We show that the compartments are closer than expected under a null model for their spatial point distribution, and furthermore that their spatial association varies according to cell state. The methods reported are general and can readily be applied to quantify the spatial interactions of other nuclear compartments.
Model-based approach to the detection and classification of mines in sidescan sonar.
Reed, Scott; Petillot, Yvan; Bell, Judith
2004-01-10
This paper presents a model-based approach to mine detection and classification by use of sidescan sonar. Advances in autonomous underwater vehicle technology have increased the interest in automatic target recognition systems in an effort to automate a process that is currently carried out by a human operator. Current automated systems generally require training and thus produce poor results when the test data set is different from the training set. This has led to research into unsupervised systems, which are able to cope with the large variability in conditions and terrains seen in sidescan imagery. The system presented in this paper first detects possible minelike objects using a Markov random field model, which operates well on noisy images, such as sidescan, and allows a priori information to be included through the use of priors. The highlight and shadow regions of the object are then extracted with a cooperating statistical snake, which assumes these regions are statistically separate from the background. Finally, a classification decision is made using Dempster-Shafer theory, where the extracted features are compared with synthetic realizations generated with a sidescan sonar simulator model. Results for the entire process are shown on real sidescan sonar data. Similarities between the sidescan sonar and synthetic aperture radar (SAR) imaging processes suggest that the approach outlined here could also be applied to SAR image analysis.
Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon
2015-01-01
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods. PMID:26900569
Objective determination of image end-members in spectral mixture analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Tompkins, Stefanie; Mustard, John F.; Pieters, Carle M.; Forsyth, Donald W.
1993-01-01
Spectral mixture analysis has been shown to be a powerful, multifaceted tool for analysis of multi- and hyper-spectral data. Applications of AVIRIS data have ranged from mapping soils and bedrock to ecosystem studies. During the first phase of the approach, a set of end-members are selected from an image cube (image end-members) that best account for its spectral variance within a constrained, linear least squares mixing model. These image end-members are usually selected using a priori knowledge and successive trial and error solutions to refine the total number and physical location of the end-members. However, in many situations a more objective method of determining these essential components is desired. We approach the problem of image end-member determination objectively by using the inherent variance of the data. Unlike purely statistical methods such as factor analysis, this approach derives solutions that conform to a physically realistic model.
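The linear unmixing step at the core of spectral mixture analysis can be sketched per pixel as a non-negative least-squares solve; the end-member spectra and pixel spectrum below are placeholders, and the sum-to-one constraint is imposed here by simple normalization rather than by the fully constrained least-squares formulation.

```python
# Per-pixel linear spectral unmixing with non-negative abundances (sketch).
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(E, pixel):
    """E: (n_bands, n_endmembers) end-member spectra; pixel: (n_bands,) reflectance."""
    abundances, residual = nnls(E, pixel)          # non-negativity constraint
    s = abundances.sum()
    return (abundances / s if s > 0 else abundances), residual

# Usage: fractions, rmse_like_residual = unmix_pixel(endmember_matrix, pixel_spectrum)
```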
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information, and are usually utilized to assist the task of segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it is also difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentation yields an overall mean absolute surface distance of 0.70 mm and a Dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performance, and completely eliminates inter-process overlap.
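The two reported metrics, mean absolute surface distance and Dice similarity index, can be computed for binary volumes roughly as in this sketch; the surface extraction by erosion and the voxel-spacing handling are assumptions, not the authors' exact evaluation code.

```python
import numpy as np
from scipy import ndimage

def dice_index(seg, ref):
    """Dice similarity index between two binary volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def mean_abs_surface_distance(seg, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean absolute surface distance (in physical units)."""
    def surface(mask):
        eroded = ndimage.binary_erosion(mask)
        return np.logical_xor(mask, eroded)
    s_seg, s_ref = surface(seg.astype(bool)), surface(ref.astype(bool))
    # distance from every voxel to the nearest surface voxel of the other mask
    d_to_ref = ndimage.distance_transform_edt(~s_ref, sampling=spacing)
    d_to_seg = ndimage.distance_transform_edt(~s_seg, sampling=spacing)
    distances = np.concatenate([d_to_ref[s_seg], d_to_seg[s_ref]])
    return distances.mean()
```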
NASA Astrophysics Data System (ADS)
Dou, Hsiang-Tai
The uncertainties due to respiratory motion present significant challenges to accurate characterization of cancerous tissues both in terms of imaging and treatment. Currently available clinical lung imaging techniques are subject to inferior image quality and incorrect motion estimation, with consequences that can systematically impact the downstream treatment delivery and outcome. The main objective of this thesis is the development of the techniques of fast helical computed tomography (CT) imaging and deformable image registration for the radiotherapy applications in accurate breathing motion modeling, lung tissue density modeling and ventilation imaging. Fast helical CT scanning was performed on a 64-slice CT scanner using the shortest available gantry rotation time and largest pitch value such that scanning of the thorax region amounts to just two seconds, which is less than a typical breathing cycle in humans. The scanning was conducted under free breathing conditions. Any portion of the lung anatomy undergoing such a scanning protocol would be irradiated for only a quarter second, effectively removing any motion-induced image artifacts. The resulting CT data were pristine volumetric images that record the lung tissue position and density in a fraction of the breathing cycle. Following our developed protocol, multiple fast helical CT scans were acquired to sample the tissue positions in different breathing states. To measure the tissue displacement, deformable image registration was performed that registers the non-reference images to the reference one. In modeling breathing motion, an external breathing surrogate signal was recorded synchronously with the CT image slices. This allowed the tissue-specific displacement to be modeled as a parametrization of the recorded breathing signal using the 5D lung motion model. To assess the accuracy of the motion model in describing tissue position change, the model was used to simulate the original high-pitch helical CT scan geometries, employed as ground truth data. Image similarity between the simulated and ground truth scans was evaluated. The model validation experiments were conducted in a cohort of seventeen patients to assess the model robustness and inter-patient variation. The model error averaged over multiple tracked positions from several breathing cycles was found to be on the order of one millimeter. In modeling the density change under free breathing conditions, the determinant of the Jacobian matrix from the registration-derived deformation vector field yielded volume change information of the lung tissues. Correlation of the Jacobian values with the corresponding voxel Hounsfield units (HU) reveals that the density variation for the majority of lung tissues can be very well described by a mass conservation relationship. Different tissue types were identified and separately modeled. Extensive validation experiments were performed. The averaged deviation between the modeled and the reference lung density was 30 HU, which was estimated to be the background CT noise level. In characterizing the lung ventilation function, a novel method was developed to determine the extent of lung tissue volume change. Information on volume change was derived from the deformable image registration of the fast helical CT images in terms of Jacobian values with respect to a reference image.
Assuming the multiple volume change measurements are independently and identically distributed, a statistical formulation was derived to model the ventilation distribution of each lung voxel, and empirical minimum and maximum probability distributions of the Jacobian values were computed. The ventilation characteristic was evaluated as the difference of the expectation values of these extremal distributions. The resulting ventilation map was compared with an independently obtained ventilation image derived directly from the lung intensities, and good correlation was found using a statistical test. In addition, dynamic ventilation characterization was investigated by estimating the voxel-specific ventilation distribution. Ventilation maps were generated at different percentile levels using the tissue volume expansion metrics.
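A minimal sketch of the Jacobian-based volume change measurement described above, assuming the displacement field is stored as a (3, nz, ny, nx) array; this is a generic finite-difference implementation, not the thesis code.

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise determinant of the Jacobian of a deformation.

    dvf : displacement vector field of shape (3, nz, ny, nx), in the same
          physical units as `spacing`. The transform is x -> x + u(x), so
          the Jacobian is I + grad(u); det > 1 means local expansion.
    """
    grads = np.empty((3, 3) + dvf.shape[1:])
    for i in range(3):            # component of the displacement
        for j in range(3):        # direction of differentiation
            grads[i, j] = np.gradient(dvf[i], spacing[j], axis=j)
            if i == j:
                grads[i, j] += 1.0   # add the identity
    # determinant of the 3x3 matrix at every voxel
    return np.linalg.det(np.moveaxis(grads, (0, 1), (-2, -1)))
```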
Leontidis, Georgios
2017-11-01
The human retina is a diverse and important tissue, widely studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside the elastic-net logistic regression, Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also have good discriminative potential. The classification systems yield promising results with the area under the curve values ranging from 0.821 to 0.968, across the four different investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
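One of the named classifiers, elastic-net logistic regression, could be sketched with scikit-learn as below; the feature matrix and labels are placeholders, and the pipeline choices (standardisation, 5-fold cross-validation, l1_ratio) are assumptions of this illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: one row per registered bifurcation/eye, columns = geometric features
# (e.g., central retinal artery/vein equivalents); y: 0 = no DR, 1 = DR.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 12))        # placeholder feature matrix
y = rng.integers(0, 2, size=120)      # placeholder labels

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (auc.mean(), auc.std()))
```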
Zhang, Miaomiao; Wells, William M; Golland, Polina
2016-10-01
Using image-based descriptors to investigate clinical hypotheses and therapeutic implications is challenging due to the notorious "curse of dimensionality" coupled with a small sample size. In this paper, we present a low-dimensional analysis of anatomical shape variability in the space of diffeomorphisms and demonstrate its benefits for clinical studies. To combat the high dimensionality of the deformation descriptors, we develop a probabilistic model of principal geodesic analysis in a bandlimited low-dimensional space that still captures the underlying variability of image data. We demonstrate the performance of our model on a set of 3D brain MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our model yields a more compact representation of group variation at substantially lower computational cost than models based on the high-dimensional state-of-the-art approaches such as tangent space PCA (TPCA) and probabilistic principal geodesic analysis (PPGA).
Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.
Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua
2017-03-01
Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features are taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. The results suggest that the quantitative morphological features of tumor pathological images predict prognosis in patients with lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
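The survival models behind the reported hazard ratios can be illustrated with a Cox proportional hazards fit; the sketch below uses the lifelines package and synthetic data purely for illustration, and is not the authors' pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 200
risk = rng.normal(size=n)                       # image-derived risk score
age = rng.normal(65, 8, size=n)
# survival times loosely depending on the risk score (illustrative only)
time = rng.exponential(scale=np.exp(-0.7 * risk) * 24.0)
event = rng.random(n) < 0.7                     # ~70% observed events

df = pd.DataFrame({"time": time, "event": event.astype(int),
                   "risk_score": risk, "age": age})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratio = exp(coef) per unit of risk score
```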
NASA Astrophysics Data System (ADS)
Treu, T.; Brammer, G.; Diego, J. M.; Grillo, C.; Kelly, P. L.; Oguri, M.; Rodney, S. A.; Rosati, P.; Sharon, K.; Zitrin, A.; Balestra, I.; Bradač, M.; Broadhurst, T.; Caminha, G. B.; Halkola, A.; Hoag, A.; Ishigaki, M.; Johnson, T. L.; Karman, W.; Kawamata, R.; Mercurio, A.; Schmidt, K. B.; Strolger, L.-G.; Suyu, S. H.; Filippenko, A. V.; Foley, R. J.; Jha, S. W.; Patel, B.
2016-01-01
Supernova “Refsdal,” multiply imaged by cluster MACS1149.5+2223, represents a rare opportunity to make a true blind test of model predictions in extragalactic astronomy, on a timescale that is short compared to a human lifetime. In order to take advantage of this event, we produced seven gravitational lens models with five independent methods, based on Hubble Space Telescope (HST) Hubble Frontier Field images, along with extensive spectroscopic follow-up observations by HST, the Very Large Telescope, and the Keck telescopes. We compare the model predictions and show that they agree reasonably well with the measured time delays and magnification ratios between the known images, even though these quantities were not used as input. This agreement is encouraging, considering that the models only provide statistical uncertainties, and do not include additional sources of uncertainties such as structure along the line of sight, cosmology, and the mass sheet degeneracy. We then present the model predictions for the other appearances of supernova “Refsdal.” A future image will reach its peak in the first half of 2016, while another image appeared between 1994 and 2004. The past image would have been too faint to be detected in existing archival images. The future image should be approximately one-third as bright as the brightest known image (i.e., H_AB ≈ 25.7 mag at peak and H_AB ≈ 26.7 mag six months before peak), and thus detectable in single-orbit HST images. We will find out soon whether our predictions are correct.
Reconstructing liver shape and position from MR image slices using an active shape model
NASA Astrophysics Data System (ADS)
Fenchel, Matthias; Thesen, Stefan; Schilling, Andreas
2008-03-01
We present an algorithm for fully automatic reconstruction of 3D position, orientation and shape of the human liver from a sparsely covering set of n 2D MR slice images. Reconstructing the shape of an organ from slice images can be used for scan planning, for surgical planning or other purposes where 3D anatomical knowledge has to be inferred from sparse slices. The algorithm is based on adapting an active shape model of the liver surface to a given set of slice images. The active shape model is created from a training set of liver segmentations from a group of volunteers. The training set is set up with semi-manual segmentations of T1-weighted volumetric MR images. Searching for the optimal shape model that best fits to the image data is done by maximizing a similarity measure based on local appearance at the surface. Two different algorithms for the active shape model search are proposed and compared: both algorithms seek to maximize the a-posteriori probability of the grey level appearance around the surface while constraining the surface to the space of valid shapes. The first algorithm works by using grey value profile statistics in normal direction. The second algorithm uses average and variance images to calculate the local surface appearance on the fly. Both algorithms are validated by fitting the active shape model to abdominal 2D slice images and comparing the shapes, which have been reconstructed, to the manual segmentations and to the results of active shape model searches from 3D image data. The results turn out to be promising and competitive to active shape model segmentations from 3D data.
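The step of "constraining the surface to the space of valid shapes" is, in standard active shape model practice, a projection onto the PCA modes with clamped coefficients; a generic sketch of that constraint (not the authors' full search algorithm) is:

```python
import numpy as np

def constrain_to_shape_space(shape, mean_shape, eigvecs, eigvals, n_std=3.0):
    """Project a candidate surface onto the active shape model and clamp
    each mode to +/- n_std standard deviations, i.e. the space of valid shapes.

    shape, mean_shape : flattened landmark coordinates, shape (3*n_points,)
    eigvecs           : (3*n_points, n_modes) PCA modes from the training set
    eigvals           : (n_modes,) corresponding variances
    """
    b = eigvecs.T @ (shape - mean_shape)              # shape parameters
    limit = n_std * np.sqrt(eigvals)
    b = np.clip(b, -limit, limit)                     # stay within valid shapes
    return mean_shape + eigvecs @ b
```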
Comparison of 7.5-minute and 1-degree digital elevation models
NASA Technical Reports Server (NTRS)
Isaacson, Dennis L.; Ripple, William J.
1995-01-01
We compared two digital elevation models (DEMs) for the Echo Mountain SE quadrangle in the Cascade Mountains of Oregon. Comparisons were made between 7.5-minute (1:24,000-scale) and 1-degree (1:250,000-scale) images using the variables of elevation, slope aspect, and slope gradient. Both visual and statistical differences are presented.
Comparison of 7.5-minute and 1-degree digital elevation models
NASA Technical Reports Server (NTRS)
Isaacson, Dennis L.; Ripple, William J.
1990-01-01
Two digital elevation models are compared for the Echo Mountain SE quadrangle in the Cascade Mountains of Oregon. Comparisons were made between 7.5-minute (1:24,000-scale) and 1-degree (1:250,000-scale) images using the variables of elevation, slope aspect, and slope gradient. Both visual and statistical differences are presented.
Weng, Hsu-Huei; Noll, Kyle R; Johnson, Jason M; Prabhu, Sujit S; Tsai, Yuan-Hsiung; Chang, Sheng-Wei; Huang, Yen-Chu; Lee, Jiann-Der; Yang, Jen-Tsung; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh; Hazle, John D; Schomer, Donald F; Liu, Ho-Ling
2018-02-01
Purpose To compare functional magnetic resonance (MR) imaging for language mapping (hereafter, language functional MR imaging) with direct cortical stimulation (DCS) in patients with brain tumors and to assess factors associated with its accuracy. Materials and Methods PubMed/MEDLINE and related databases were searched for research articles published between January 2000 and September 2016. Findings were pooled by using bivariate random-effects and hierarchic summary receiver operating characteristic curve models. Meta-regression and subgroup analyses were performed to evaluate whether publication year, functional MR imaging paradigm, magnetic field strength, statistical threshold, and analysis software affected classification accuracy. Results Ten articles with a total of 214 patients were included in the analysis. On a per-patient basis, the pooled sensitivity and specificity of functional MR imaging was 44% (95% confidence interval [CI]: 14%, 78%) and 80% (95% CI: 54%, 93%), respectively. On a per-tag basis (ie, each DCS stimulation site or "tag" was considered a separate data point across all patients), the pooled sensitivity and specificity were 67% (95% CI: 51%, 80%) and 55% (95% CI: 25%, 82%), respectively. The per-tag analysis showed significantly higher sensitivity for studies with shorter functional MR imaging session times (P = .03) and relaxed statistical threshold (P = .05). Significantly higher specificity was found when expressive language task (P = .02), longer functional MR imaging session times (P < .01), visual presentation of stimuli (P = .04), and stringent statistical threshold (P = .01) were used. Conclusion Results of this study showed moderate accuracy of language functional MR imaging when compared with intraoperative DCS, and the included studies displayed significant methodologic heterogeneity. © RSNA, 2017 Online supplemental material is available for this article.
Models and Measurements for Multi-Layer Displays
2006-07-26
Indexed excerpt (fragmentary): the observed statistical variation in the data results from laser speckle, and systematic uncertainties are expected to be smaller; powerful statistical techniques exist to deal with this type of experiment. The remaining fragments are MATLAB comment lines describing trace-width and transistor-size parameters in tens of micrometers and a plotting option.
Statistical characterization of portal images and noise from portal imaging systems.
González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge
2013-06-01
In this paper, we consider the statistical characteristics of the so-called portal images, which are acquired prior to the radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the well-known noise and image features of other image modalities, such as natural images, can be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, like the characteristics of the natural image and the white noise. Finally, we discuss the implication of the results obtained in several noise reduction methods that operate in the wavelet domain.
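The marginal wavelet-coefficient statistics mentioned above can be estimated, for example, with PyWavelets; the wavelet choice and the use of variance and excess kurtosis as summaries are assumptions of this sketch, not the paper's exact procedure.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_marginal_stats(image, wavelet="db2", levels=3):
    """Marginal statistics of wavelet subband coefficients of a portal image.

    Returns, per detail subband, the empirical variance and excess kurtosis;
    heavy tails (kurtosis >> 0) are typical of natural-image subbands, while
    pure white Gaussian noise gives kurtosis close to 0.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    stats = {}
    for lev, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        for name, band in zip("HVD", (cH, cV, cD)):
            c = band.ravel()
            stats[f"L{lev}{name}"] = (c.var(), kurtosis(c))
    return stats

# Example: statistics of simulated white noise (kurtosis near zero expected)
noise = np.random.default_rng(3).normal(size=(256, 256))
print(wavelet_marginal_stats(noise))
```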
NASA Astrophysics Data System (ADS)
Ott, Julien G.; Becce, Fabio; Monnin, Pascal; Schmidt, Sabine; Bochud, François O.; Verdun, Francis R.
2014-08-01
The state of the art to describe image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit in various acquisition conditions. The NPW model observer usually requires the use of the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem when dealing with the traditional filtered back-projection (FBP) algorithm, this is not the case when using iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already shown it could accurately express the system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. It was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition. Then, mathematical transformations were performed leading to the TTF. As expected, the first results showed a dependency of the image contrast and noise levels on the TTF for both ASIR and MBIR. Moreover, FBP also proved to be dependent on the contrast and noise when using the lung kernel. Those results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
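For reference, the commonly used NPW figure of merit with the TTF substituted for the MTF, as described above, can be evaluated on sampled spectra roughly as follows; the discrete normalisation and grid handling here are assumptions, not the paper's exact computation.

```python
import numpy as np

def npw_snr(task_ft, ttf, nps, df):
    """Non-prewhitening (NPW) model observer SNR.

    task_ft : |W(u,v)|, magnitude of the Fourier transform of the task function
    ttf     : target transfer function on the same 2D frequency grid (used in
              place of the MTF, as in the abstract)
    nps     : noise power spectrum on the same grid
    df      : area of one frequency bin
    Uses the standard form SNR^2 = (sum |W*TTF|^2)^2 / sum |W*TTF|^2 * NPS.
    """
    signal_power = (np.abs(task_ft) * ttf) ** 2
    num = (signal_power.sum() * df) ** 2
    den = (signal_power * nps).sum() * df
    return np.sqrt(num / den)
```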
Chiverton, John P; Ige, Olubisi; Barnett, Stephanie J; Parry, Tony
2017-11-01
This paper is concerned with the modeling and analysis of the orientation and distance between steel fibers in X-ray micro-tomography data. The advantage of combining both orientation and separation in a model is that it helps provide a detailed understanding of how the steel fibers are arranged, in a form that is easy to compare. The developed models are designed to summarize the randomness of the orientation distribution of the steel fibers both locally and across an entire volume based on multiscale entropy. Theoretical modeling, simulation, and application to real imaging data are shown here. The theoretical modeling of multiscale entropy for orientation includes a proof showing the final form of the multiscale entropy taken over a linear range of scales. A series of image processing operations are also included to overcome interslice connectivity issues to help derive the statistical descriptions of the orientation distributions of the steel fibers. The results demonstrate that multiscale entropy provides unique insights into both simulated and real imaging data of steel fiber reinforced concrete.
The transition of oncologic imaging from its “industrial era” to its “information era” demands analytical methods that 1) extract information from this data that is clinically and biologically relevant; 2) integrate imaging, clinical, and genomic data via rigorous statistical and computational methodologies in order to derive models valuable for understanding cancer mechanisms, diagnosis, prognostic assessment, response evaluation, and personalized treatment management; 3) are available to the biomedical community for easy use and application, with the aim of understanding, diagnosing, an
NASA Astrophysics Data System (ADS)
Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari
2008-03-01
In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumor because of the prevalence of such tumor in pediatric patients. Due to varying appearance in MRI, we propose to model the tumor texture with a multi-fractal process, such as a multi-fractional Brownian motion (mBm). In mBm, the time-varying Holder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two-dimension and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet based multi-fractal feature along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, are fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the tumor discriminating power of our novel multi-fractal texture along with intensity and fractal features in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROCs and demonstrate how sharply the curves reach the specificity of 1.0 sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in automatic detection of PF tumors in pediatric MRIs.
Figueiredo, Vânia F; Amorim, Juleimar S C; Pereira, Aline M; Ferreira, Paulo H; Pereira, Leani S M
2015-01-01
Low back pain (LBP) and urinary incontinence (UI) are highly prevalent among elderly individuals. In young adults, changes in trunk muscle recruitment, as assessed via ultrasound imaging, may be associated with lumbar spine stability. To assess the associations between LBP, UI, and the pattern of transversus abdominis (TrA), internal (IO), and external oblique (EO) muscle recruitment in the elderly as evaluated by ultrasound imaging. Fifty-four elderly individuals (mean age: 72±5.2 years) who complained of LBP and/or UI as assessed by the McGill Pain Questionnaire, Incontinence Questionnaire-Short Form, and ultrasound imaging were included in the study. The statistical analysis comprised a multiple linear regression model, and a p-value <0.05 was considered significant. The regression models for the TrA, IO, and EO muscle thickness levels explained 2.0% (R2=0.02; F=0.47; p=0.628), 10.6% (R2=0.106; F=3.03; p=0.057), and 10.1% (R2=0.101; F=2.70; p=0.077) of the variability, respectively. None of the regression models developed for the abdominal muscles exhibited statistical significance. A significant and negative association (p=0.018; β=-0.0343) was observed only between UI and IO recruitment. These results suggest that age-related factors may have interfered with the findings of the study, thus emphasizing the need to perform ultrasound imaging-based studies to measure abdominal muscle recruitment in the elderly.
Fontarensky, Mikael; Alfidja, Agaïcha; Perignon, Renan; Schoenig, Arnaud; Perrier, Christophe; Mulliez, Aurélien; Guy, Laurent; Boyer, Louis
2015-07-01
To evaluate the accuracy of reduced-dose abdominal computed tomographic (CT) imaging by using a new generation model-based iterative reconstruction (MBIR) to diagnose acute renal colic compared with a standard-dose abdominal CT with 50% adaptive statistical iterative reconstruction (ASIR). This institutional review board-approved prospective study included 118 patients with symptoms of acute renal colic who underwent the following two successive CT examinations: standard-dose ASIR 50% and reduced-dose MBIR. Two radiologists independently reviewed both CT examinations for presence or absence of renal calculi, differential diagnoses, and associated abnormalities. The imaging findings, radiation dose estimates, and image quality of the two CT reconstruction methods were compared. Concordance was evaluated by κ coefficient, and descriptive statistics and t test were used for statistical analysis. Intraobserver correlation was 100% for the diagnosis of renal calculi (κ = 1). Renal calculus (τ = 98.7%; κ = 0.97) and obstructive upper urinary tract disease (τ = 98.16%; κ = 0.95) were detected, and differential or alternative diagnosis was performed (τ = 98.87%; κ = 0.95). MBIR allowed a dose reduction of 84% versus standard-dose ASIR 50% (mean volume CT dose index, 1.7 mGy ± 0.8 [standard deviation] vs 10.9 mGy ± 4.6; mean size-specific dose estimate, 2.2 mGy ± 0.7 vs 13.7 mGy ± 3.9; P < .001) without a conspicuous deterioration in image quality (reduced-dose MBIR vs ASIR 50% mean scores, 3.83 ± 0.49 vs 3.92 ± 0.27, respectively; P = .32) or increase in noise (reduced-dose MBIR vs ASIR 50% mean, respectively, 18.36 HU ± 2.53 vs 17.40 HU ± 3.42). Its main drawback remains the long time required for reconstruction (mean, 40 minutes). A reduced-dose protocol with MBIR allowed a dose reduction of 84% without increasing noise and without a conspicuous deterioration in image quality in patients suspected of having renal colic.
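The κ agreement statistics reported above can be reproduced for any pair of categorical readings with, for example, scikit-learn; the readings below are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient readings: presence (1) / absence (0) of a renal
# calculus on the standard-dose ASIR series and on the reduced-dose MBIR series.
asir_reading = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
mbir_reading = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(asir_reading, mbir_reading)
print(f"kappa = {kappa:.2f}")   # 1.0 here, i.e. perfect agreement
```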
Ding, Liya; Martinez, Aleix M
2010-11-01
The appearance-based approach to face detection has seen great advances in the last several years. In this approach, we learn the image statistics describing the texture pattern (appearance) of the object class we want to detect, e.g., the face. However, this approach has had limited success in providing an accurate and detailed description of the internal facial features, i.e., eyes, brows, nose, and mouth. In general, this is due to the limited information carried by the learned statistical model. While the face template is relatively rich in texture, facial features (e.g., eyes, nose, and mouth) do not carry enough discriminative information to tell them apart from all possible background images. We resolve this problem by adding the context information of each facial feature in the design of the statistical model. In the proposed approach, the context information defines the image statistics most correlated with the surroundings of each facial component. This means that when we search for a face or facial feature, we look for those locations which most resemble the feature yet are most dissimilar to its context. This dissimilarity with the context features forces the detector to gravitate toward an accurate estimate of the position of the facial feature. Learning to discriminate between feature and context templates is difficult, however, because the context and the texture of the facial features vary widely under changing expression, pose, and illumination, and may even resemble one another. We address this problem with the use of subclass divisions. We derive two algorithms to automatically divide the training samples of each facial feature into a set of subclasses, each representing a distinct construction of the same facial component (e.g., closed versus open eyes) or its context (e.g., different hairstyles). The first algorithm is based on a discriminant analysis formulation. The second algorithm is an extension of the AdaBoost approach. We provide extensive experimental results using still images and video sequences for a total of 3,930 images. We show that the results are almost as good as those obtained with manual detection.
Improvement and extension of a radar forest backscattering model
NASA Technical Reports Server (NTRS)
Simonett, David S.; Wang, Yong
1989-01-01
Radar modeling of mangal forest stands, in the Sundarbans area of Southern Bangladesh, was developed. The modeling employs radar system parameters such as wavelength, polarization, and incidence angle, with forest data on tree height, spacing, biomass, species combinations, and water content (including slightly conductive water) both in leaves and trunks of the mangal. For Sundri and Gewa tropical mangal forests, five model components are proposed, which are required to explain the contributions of various forest species combinations in the attenuation and scattering of mangal vegetated nonflooded or flooded surfaces. Statistical data of simulated images (HH components only) were compared with those of SIR-B images both to refine the modeling procedures and to appropriately characterize the model output. The possibility of delineation of flooded or non-flooded boundaries is discussed.
Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu
2015-01-01
The potential of visible and near infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and successive projections algorithm. Image textural features of the principal component images from hyperspectral images were obtained using histogram statistics (HS), gray level co-occurrence matrix (GLCM) and gray level-gradient co-occurrence matrix (GLGCM). By these spectral and textural features, probabilistic neural network (PNN) models for classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, optimum wavelengths with HS image features, and optimum wavelengths with GLCM image features, the model integrating optimum wavelengths with GLGCM gave the highest classification rate of 93.14% and 90.91% for calibration and validation sets, respectively. Results indicated that the classification accuracy can be improved by combining spectral features with textural features and the fusion of critical spectral and textural features had better potential than single spectral extraction in classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
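A sketch of the GLCM texture-feature step follows; the function names match current scikit-image (older releases spell them greycomatrix/greycoprops), and the quantisation to 32 gray levels and the chosen offsets are assumptions of this illustration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, levels=32):
    """Gray level co-occurrence matrix (GLCM) texture features of an
    8-bit principal-component image, quantised to `levels` gray levels."""
    q = (gray_image.astype(float) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Example on a random 8-bit image patch
patch = np.random.default_rng(4).integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_features(patch))
```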
Paddock, Michael T; Bailitz, John; Horowitz, Russ; Khishfe, Basem; Cosby, Karen; Sergel, Michelle J
2015-03-01
Pre-hospital focused assessment with sonography in trauma (FAST) has been effectively used to improve patient care in multiple mass casualty events throughout the world. Although requisite FAST knowledge may now be learned remotely by disaster response team members, traditional live instructor and model hands-on FAST skills training remains logistically challenging. The objective of this pilot study was to compare the effectiveness of a novel portable ultrasound (US) simulator with traditional FAST skills training for a deployed mixed provider disaster response team. We randomized participants into one of three training groups stratified by provider role: Group A. Traditional Skills Training, Group B. US Simulator Skills Training, and Group C. Traditional Skills Training Plus US Simulator Skills Training. After skills training, we measured participants' FAST image acquisition and interpretation skills using a standardized direct observation tool (SDOT) with healthy models and review of FAST patient images. Pre- and post-course US and FAST knowledge were also assessed using a previously validated multiple-choice evaluation. We used the ANOVA procedure to determine the statistical significance of differences between the means of each group's skills scores. Paired sample t-tests were used to determine the statistical significance of pre- and post-course mean knowledge scores within groups. We enrolled 36 participants, 12 randomized to each training group. Randomization resulted in similar distribution of participants between training groups with respect to provider role, age, sex, and prior US training. For the FAST SDOT image acquisition and interpretation mean skills scores, there was no statistically significant difference between training groups. For US and FAST mean knowledge scores, there was a statistically significant improvement between pre- and post-course scores within each group, but again there was not a statistically significant difference between training groups. This pilot study of a deployed mixed-provider disaster response team suggests that a novel portable US simulator may provide equivalent skills training in comparison to traditional live instructor and model training. Further studies with a larger sample size and other measures of short- and long-term clinical performance are warranted.
Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun
2017-03-16
Model-based iterative reconstruction (MBIR) is a promising reconstruction method which could improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR for noise reduction and image quality improvement in low dose chest CT for children with necrotizing pneumonia, over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard of care low dose CT scans were included. Thinner-slice (0.625 mm) images were retrospectively reconstructed using MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using a Friedman's test. Radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in more detection and better identification of necrotic lesions (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9 and 26.3% in MBIR images as compared to FBP and ASIR images. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images. The quality of chest CT images in children with necrotizing pneumonia was significantly improved by the MBIR technique as compared to the ASIR and FBP reconstructions, providing a more confident and accurate diagnosis of necrotizing pneumonia.
3DNOW: Image-Based 3d Reconstruction and Modeling via Web
NASA Astrophysics Data System (ADS)
Tefera, Y.; Poiesi, F.; Morabito, D.; Remondino, F.; Nocerino, E.; Chippendale, P.
2018-05-01
This paper presents a web-based 3D imaging pipeline, namely 3Dnow, that can be used by anyone without the need of installing additional software other than a browser. By uploading a set of images through the web interface, 3Dnow can generate sparse and dense point clouds as well as mesh models. 3D reconstructed models can be downloaded with standard formats or previewed directly on the web browser through an embedded visualisation interface. In addition to reconstructing objects, 3Dnow offers the possibility to evaluate and georeference point clouds. Reconstruction statistics, such as minimum, maximum and average intersection angles, point redundancy and density can also be accessed. The paper describes all features available in the web service and provides an analysis of the computational performance using servers with different GPU configurations.
Are patient specific meshes required for EIT head imaging?
Jehl, Markus; Aristovich, Kirill; Faulkner, Mayo; Holder, David
2016-06-01
Head imaging with electrical impedance tomography (EIT) is usually done with time-differential measurements, to reduce time-invariant modelling errors. Previous research suggested that more accurate head models improved image quality, but no thorough analysis has been done on the required accuracy. We propose a novel pipeline for creation of precise head meshes from magnetic resonance imaging and computed tomography scans, which was applied to four different heads. Voltages were simulated on all four heads for perturbations of different magnitude, haemorrhage and ischaemia, in five different positions and for three levels of instrumentation noise. Statistical analysis showed that reconstructions on the correct mesh were on average 25% better than on the other meshes. However, the stroke detection rates were not improved. We conclude that a generic head mesh is sufficient for monitoring patients for secondary strokes following head trauma.
Nahid, Abdullah-Al; Mehrabi, Mohamad Ali; Kong, Yinan
2018-01-01
Breast cancer is a serious threat and one of the leading causes of death among women throughout the world. The identification of cancer largely depends on the analysis of digital biomedical images, such as histopathological images, by doctors and physicians. Analyzing histopathological images is a nontrivial task, and decisions from investigation of these kinds of images always require specialised knowledge. However, Computer Aided Diagnosis (CAD) techniques can help the doctor make more reliable decisions. State-of-the-art Deep Neural Network (DNN) techniques have recently been introduced for biomedical image analysis. Normally each image contains structural and statistical information. This paper classifies a set of biomedical breast cancer images (BreakHis dataset) using novel DNN techniques guided by structural and statistical information derived from the images. Specifically a Convolutional Neural Network (CNN), a Long-Short-Term-Memory (LSTM), and a combination of CNN and LSTM are proposed for breast cancer image classification. Softmax and Support Vector Machine (SVM) layers have been used for the decision-making stage after extracting features utilising the proposed novel DNN models. In this experiment the best Accuracy value of 91.00% is achieved on the 200x dataset, the best Precision value of 96.00% is achieved on the 40x dataset, and the best F-Measure value is achieved on both the 40x and 100x datasets.
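A compact CNN-feeding-LSTM classifier in the spirit of the combined model can be sketched in Keras as below; the layer sizes, the row-wise reshaping of the feature map and the softmax head are illustrative assumptions, not the architecture used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(input_shape=(128, 128, 3), n_classes=2):
    """A small CNN followed by an LSTM that reads the feature map row by row."""
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)                    # feature map of shape (32, 32, 64)
    x = layers.Reshape((32, 32 * 64))(x)            # sequence of 32 row vectors
    x = layers.LSTM(128)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_cnn_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```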
WAVELET-DOMAIN REGRESSION AND PREDICTIVE INFERENCE IN PSYCHIATRIC NEUROIMAGING
Reiss, Philip T.; Huo, Lan; Zhao, Yihong; Kelly, Clare; Ogden, R. Todd
2016-01-01
An increasingly important goal of psychiatry is the use of brain imaging data to develop predictive models. Here we present two contributions to statistical methodology for this purpose. First, we propose and compare a set of wavelet-domain procedures for fitting generalized linear models with scalar responses and image predictors: sparse variants of principal component regression and of partial least squares, and the elastic net. Second, we consider assessing the contribution of image predictors over and above available scalar predictors, in particular via permutation tests and an extension of the idea of confounding to the case of functional or image predictors. Using the proposed methods, we assess whether maps of a spontaneous brain activity measure, derived from functional magnetic resonance imaging, can meaningfully predict presence or absence of attention deficit/hyperactivity disorder (ADHD). Our results shed light on the role of confounding in the surprising outcome of the recent ADHD-200 Global Competition, which challenged researchers to develop algorithms for automated image-based diagnosis of the disorder. PMID:27330652
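The permutation-test idea for assessing the added value of image predictors over scalar predictors could be sketched as follows; the classifier, the cross-validated AUC statistic and the permutation scheme are assumptions of this illustration, not the authors' exact wavelet-domain procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def incremental_value_pvalue(X_scalar, X_image, y, n_perm=1000, seed=0):
    """Permutation test of whether image-derived predictors add predictive
    value beyond available scalar predictors.

    The image block is permuted across subjects, breaking its link to y while
    keeping the scalar predictors intact, and cross-validated AUC is compared.
    """
    rng = np.random.default_rng(seed)
    clf = LogisticRegression(max_iter=5000)

    def auc_with(X_img):
        X = np.hstack([X_scalar, X_img])
        return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

    observed = auc_with(X_image)
    null = np.array([auc_with(X_image[rng.permutation(len(y))])
                     for _ in range(n_perm)])
    return observed, (np.sum(null >= observed) + 1) / (n_perm + 1)
```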
NASA Astrophysics Data System (ADS)
Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten
2014-03-01
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard methods from image analysis non-optimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy regarding the resulting textures.
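A minimal speckle-pattern generator in the spirit described above: summing many random scatterer contributions per resolution cell yields fully developed (Rayleigh) speckle, and the scatterer density is one knob for mimicking different tissue types. All parameter values and distribution choices are illustrative, not the phantom's actual simulation.

```python
import numpy as np

def simulate_speckle(shape, scatterers_per_cell=20.0, mean_amplitude=1.0, seed=0):
    """Fully developed speckle via a complex random-walk scatterer model.

    Summing many independent scatterer contributions per resolution cell gives
    a circular complex Gaussian field, so the envelope is Rayleigh distributed;
    lowering the scatterer density moves the statistics away from Rayleigh.
    """
    rng = np.random.default_rng(seed)
    n = rng.poisson(scatterers_per_cell, size=shape)   # scatterers per cell
    field = np.zeros(shape, dtype=complex)
    for _ in range(int(n.max())):
        active = n > 0
        phase = rng.uniform(0, 2 * np.pi, size=shape)
        amp = rng.exponential(mean_amplitude, size=shape)
        field[active] += amp[active] * np.exp(1j * phase[active])
        n -= 1
    return np.abs(field)                               # envelope image

envelope = simulate_speckle((128, 128))
print(envelope.mean(), envelope.std())
```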
NASA Astrophysics Data System (ADS)
Mohd. Rijal, Omar; Mohd. Noor, Norliza; Teng, Shee Lee
A statistical method of comparing two digital chest radiographs for Pulmonary Tuberculosis (PTB) patients has been proposed. After applying appropriate image registration procedures, a selected subset of each image is converted to an image histogram (or box plot). Comparing two chest X-ray images is equivalent to the direct comparison of the two corresponding histograms. From each histogram, eleven percentiles (of image intensity) are calculated. The number of percentiles that shift to the left (NLSP) when the second image is compared to the first has been shown to be an indicator of patients' progress. In this study, the values of NLSP are compared with the actual diagnosis (Y) of several medical practitioners. A logistic regression model is used to study the relationship between NLSP and Y. This study showed that NLSP may be used as an alternative or second opinion for Y. The proposed regression model also shows that important explanatory variables such as outcomes of sputum test (Z) and degree of image registration (W) may be omitted when estimating Y-values.
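The NLSP statistic itself is straightforward to compute; the sketch below assumes a particular set of eleven percentile cut points, which the abstract does not specify.

```python
import numpy as np

PERCENTILES = (5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95)  # eleven illustrative cut points

def nlsp(roi_first, roi_second, percentiles=PERCENTILES):
    """Number of percentiles shifting to the left (NLSP) when the second
    chest image is compared with the first.

    roi_first, roi_second : registered image subsets (intensity arrays).
    A left shift means the percentile of the follow-up image is lower than
    the corresponding percentile of the baseline image.
    """
    p1 = np.percentile(roi_first, percentiles)
    p2 = np.percentile(roi_second, percentiles)
    return int(np.sum(p2 < p1))
```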
Sharpening method of satellite thermal image based on the geographical statistical model
NASA Astrophysics Data System (ADS)
Qi, Pengcheng; Hu, Shixiong; Zhang, Haijun; Guo, Guangmeng
2016-04-01
To improve the effectiveness of thermal sharpening in mountainous regions, with closer attention to the laws of land surface energy balance, a thermal sharpening method based on the geographical statistical model (GSM) is proposed. Explanatory variables were selected from the processes of the land surface energy budget and thermal infrared electromagnetic radiation transmission; then high spatial resolution (57 m) raster layers were generated for these variables through spatial simulation or by using other raster data as proxies. Based on this, the locally adaptive statistical relationship between brightness temperature (BT) and the explanatory variables, i.e., the GSM, was built at 1026-m resolution using the method of multivariate adaptive regression splines. Finally, the GSM was applied to the high-resolution (57-m) explanatory variables; thus, the high-resolution (57-m) BT image was obtained. This method produced a sharpening result with low error and good visual effect. The method can avoid the blind choice of explanatory variables and remove the dependence on synchronous imagery at visible and near-infrared bands. The influences of the explanatory variable combination, the sampling method, and the residual error correction on sharpening results were analyzed in detail, and their influence mechanisms are reported herein.
Semiautomatic tumor segmentation with multimodal images in a conditional random field framework.
Hu, Yu-Chi; Grossberg, Michael; Mageras, Gikas
2016-04-01
Volumetric medical images of a single subject can be acquired using different imaging modalities, such as computed tomography, magnetic resonance imaging (MRI), and positron emission tomography. In this work, we present a semiautomatic segmentation algorithm that can leverage the synergies between different image modalities while integrating interactive human guidance. The algorithm provides a statistical segmentation framework partly automating the segmentation task while still maintaining critical human oversight. The statistical models presented are trained interactively using simple brush strokes to indicate tumor and nontumor tissues and using intermediate results within a patient's image study. To accomplish the segmentation, we construct the energy function in the conditional random field (CRF) framework. For each slice, the energy function is set using the estimated probabilities from both user brush stroke data and prior approved segmented slices within a patient study. The progressive segmentation is obtained using a graph-cut-based minimization. Although no similar semiautomated algorithm is currently available, we evaluated our method with an MRI data set from Medical Image Computing and Computer Assisted Intervention Society multimodal brain segmentation challenge (BRATS 2012 and 2013) against a similar fully automatic method based on CRF and a semiautomatic method based on grow-cut, and our method shows superior performance.
NASA Astrophysics Data System (ADS)
Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.
2014-09-01
Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands due to mixed pixels when a medium spatial resolution sensor is considered, we propose to base the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables of original bands, fraction imagery, Normalized Difference Vegetation Index and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (short wave infrared 5, 2.295-2.365 μm) and green vegetation fraction (from LSMA) was the best AGB predictor (adjusted R² = 0.632; the root-mean-squared error of estimated AGB was 13.3 Mg ha⁻¹, or 37.7%, resulting from cross-validation), rather than other combinations of the above cited independent variables. Results indicated that using ASTER fraction images in regression models improves the AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information for forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation or monitoring of management practices.
Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.
2012-01-01
The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures used in the reconstruction processes to be conducted. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful differences in interpretation of results. PMID:22617147
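For context, the complex-valued weighted least squares SENSE unfolding referred to above has the standard closed form (S^H Ψ⁻¹ S)⁻¹ S^H Ψ⁻¹ a per aliased voxel; a generic sketch with synthetic coil sensitivities (not the SENSE-ITIVE real-valued isomorphism itself) is:

```python
import numpy as np

def sense_unfold(aliased, sens, coil_cov):
    """Weighted least squares SENSE unfolding of one aliased voxel.

    aliased  : (n_coils,) complex aliased measurements at this voxel
    sens     : (n_coils, R) complex coil sensitivities of the R voxels that
               fold on top of each other (R = acceleration factor)
    coil_cov : (n_coils, n_coils) receiver-coil noise covariance (Psi)
    Returns the R unfolded voxel values.
    """
    psi_inv = np.linalg.inv(coil_cov)
    normal = sens.conj().T @ psi_inv @ sens          # S^H Psi^-1 S
    rhs = sens.conj().T @ psi_inv @ aliased          # S^H Psi^-1 a
    return np.linalg.solve(normal, rhs)

# Toy example: 4 coils, acceleration factor 2
rng = np.random.default_rng(5)
S = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
truth = np.array([1.0 + 0.5j, 0.3 - 0.2j])
a = S @ truth + 0.01 * (rng.normal(size=4) + 1j * rng.normal(size=4))
print(sense_unfold(a, S, np.eye(4)))
```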
Ertas, Gokhan; Onaygil, Can; Akin, Yasin; Kaya, Handan; Aribal, Erkin
2016-12-01
To investigate the accuracy of diffusion coefficients and diffusion coefficient ratios of breast lesions and of glandular breast tissue from mono- and stretched-exponential models for quantitative diagnosis in diffusion-weighted magnetic resonance imaging (MRI). We analyzed 170 pathologically confirmed lesions (85 benign and 85 malignant) imaged using a 3.0T MR scanner. Small regions of interest (ROIs) focusing on the highest signal intensity for lesions and also for glandular tissue of the contralateral breast were obtained. Apparent diffusion coefficient (ADC) and distributed diffusion coefficient (DDC) were estimated by performing nonlinear fittings using mono- and stretched-exponential models, respectively. Coefficient ratios were calculated by dividing the lesion coefficient by the glandular tissue coefficient. The stretched exponential model provides significantly better fits than the monoexponential model (P < 0.001): 65% of the better fits for glandular tissue and 71% of the better fits for lesion. High correlation was found in diffusion coefficients (0.99-0.81) and coefficient ratios (0.94) between the models. The highest diagnostic accuracy was found for the DDC ratio (area under the curve [AUC] = 0.93) when compared with lesion DDC, ADC ratio, and lesion ADC (AUC = 0.91, 0.90, 0.90), but with no statistically significant difference (P > 0.05). At optimal thresholds, the DDC ratio achieves 93% sensitivity, 80% specificity, and 87% overall diagnostic accuracy, while the ADC ratio leads to 89% sensitivity, 78% specificity, and 83% overall diagnostic accuracy. The stretched exponential model fits better with signal intensity measurements from both lesion and glandular tissue ROIs. Although the DDC ratio estimated by using the model shows a higher diagnostic accuracy than the ADC ratio, lesion DDC, and ADC, it is not statistically significant. J. Magn. Reson. Imaging 2016;44:1633-1641. © 2016 International Society for Magnetic Resonance in Medicine.
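The two signal models can be fitted per ROI with standard nonlinear least squares; in this sketch the b-values and signal intensities are hypothetical, and the fitting choices (initial values, bounds) are assumptions rather than the study's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(b, s0, adc):
    return s0 * np.exp(-b * adc)

def stretched_exp(b, s0, ddc, alpha):
    return s0 * np.exp(-(b * ddc) ** alpha)

# Hypothetical mean signal intensities for a small lesion ROI at several b-values
b = np.array([0, 50, 200, 400, 600, 800, 1000], dtype=float)      # s/mm^2
signal = np.array([1.00, 0.93, 0.78, 0.63, 0.52, 0.44, 0.38])

(adc_s0, adc), _ = curve_fit(mono_exp, b, signal, p0=(1.0, 1e-3))
(sd_s0, ddc, alpha), _ = curve_fit(stretched_exp, b, signal,
                                   p0=(1.0, 1e-3, 0.9),
                                   bounds=([0, 0, 0.1], [2, 1e-2, 1.0]))
print(f"ADC = {adc:.2e} mm^2/s, DDC = {ddc:.2e} mm^2/s, alpha = {alpha:.2f}")
# a lesion-to-glandular-tissue ratio would divide these by the same fit
# on the contralateral glandular ROI
```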
Evaluation of the flame propagation within an SI engine using flame imaging and LES
NASA Astrophysics Data System (ADS)
He, Chao; Kuenne, Guido; Yildar, Esra; van Oijen, Jeroen; di Mare, Francesca; Sadiki, Amsini; Ding, Carl-Philipp; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Janicka, Johannes
2017-11-01
This work shows experiments and simulations of the fired operation of a spark ignition engine with port-fuelled injection. The test rig considered is an optically accessible single cylinder engine specifically designed at TU Darmstadt for the detailed investigation of in-cylinder processes and model validation. The engine was operated under lean conditions using iso-octane as a substitute for gasoline. Experiments have been conducted to provide a sound database of the combustion process. A planar flame imaging technique has been applied within the swirl- and tumble-planes to provide statistical information on the combustion process to complement a pressure-based comparison between simulation and experiments. This data is then analysed and used to assess the large eddy simulation performed within this work. For the simulation, the engine code KIVA has been extended by the dynamically thickened flame model combined with chemistry reduction by means of pressure dependent tabulation. Sixty cycles have been simulated to perform a statistical evaluation. Based on a detailed comparison with the experimental data, a systematic study has been conducted to obtain insight into the most crucial modelling uncertainties.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with an SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching techniques, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.
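For background on the "statistical instantiation" stage mentioned above, a statistical shape model expresses a new surface as the mean shape plus a weighted sum of principal modes of variation. The sketch below is a generic PCA-based SSM, not the authors' code; array shapes and function names are assumptions for illustration.

```python
import numpy as np

def build_ssm(training_shapes):
    """training_shapes: (n_shapes, n_points, 3) array of corresponding surfaces."""
    X = training_shapes.reshape(len(training_shapes), -1)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = s**2 / (len(X) - 1)          # variance captured by each mode
    return mean, Vt, eigvals

def instantiate(mean, modes, eigvals, coeffs):
    """Reconstruct a surface from mode coefficients given in standard deviations."""
    k = len(coeffs)
    shape = mean + (np.asarray(coeffs) * np.sqrt(eigvals[:k])) @ modes[:k]
    return shape.reshape(-1, 3)
```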
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between objective scores of these methods and human perceptual scores is considered as their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with the features of natural image statistics derived from multiple domains. The proposed features have been used for augmenting five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
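A minimal sketch, under assumed array shapes, of the PCA-based sampling idea described above: decompose observed DVF error maps into principal modes, draw independent Gaussian coefficients per mode, and reconstruct synthetic error maps that preserve the observed spatial correlation. This is an illustration of the principle, not the authors' implementation.

```python
import numpy as np

def sample_dvf_errors(error_maps, n_samples, rng=None):
    """error_maps: (n_observations, n_voxels*3) flattened DVF error maps."""
    rng = np.random.default_rng(rng)
    mean = error_maps.mean(axis=0)
    U, s, Vt = np.linalg.svd(error_maps - mean, full_matrices=False)
    std = s / np.sqrt(len(error_maps) - 1)        # per-mode standard deviation
    coeffs = rng.standard_normal((n_samples, len(std))) * std
    return mean + coeffs @ Vt                     # (n_samples, n_voxels*3)
```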
A computerized MRI biomarker quantification scheme for a canine model of Duchenne muscular dystrophy
Wang, Jiahui; Fan, Zheng; Vandenborne, Krista; Walter, Glenn; Shiloh-Malawsky, Yael; An, Hongyu; Kornegay, Joe N.; Styner, Martin A.
2015-01-01
Purpose Golden retriever muscular dystrophy (GRMD) is a widely used canine model of Duchenne muscular dystrophy (DMD). Recent studies have shown that magnetic resonance imaging (MRI) can be used to non-invasively detect consistent changes in both DMD and GRMD. In this paper, we propose a semi-automated system to quantify MRI biomarkers of GRMD. Methods Our system was applied to a database of 45 MRI scans from 8 normal and 10 GRMD dogs in a longitudinal natural history study. We first segmented six proximal pelvic limb muscles using two competing schemes: 1) standard, limited muscle range segmentation and 2) semi-automatic full muscle segmentation. We then performed pre-processing, including: intensity inhomogeneity correction, spatial registration of different image sequences, intensity calibration of T2-weighted (T2w) and T2-weighted fat suppressed (T2fs) images, and calculation of MRI biomarker maps. Finally, for each of the segmented muscles, we automatically measured MRI biomarkers of muscle volume and intensity statistics over MRI biomarker maps, and statistical image texture features. Results The muscle volume and the mean intensities in T2 value, fat, and water maps showed group differences between normal and GRMD dogs. For the statistical texture biomarkers, both the histogram and run-length matrix features showed obvious group differences between normal and GRMD dogs. The full muscle segmentation shows significantly less error and variability in the proposed biomarkers when compared to the standard, limited muscle range segmentation. Conclusion The experimental results demonstrated that this quantification tool can reliably quantify MRI biomarkers in GRMD dogs, suggesting that it would also be useful for quantifying disease progression and measuring therapeutic effect in DMD patients. PMID:23299128
Pouch, Alison M; Vergnat, Mathieu; McGarvey, Jeremy R; Ferrari, Giovanni; Jackson, Benjamin M; Sehgal, Chandra M; Yushkevich, Paul A; Gorman, Robert C; Gorman, Joseph H
2014-01-01
The basis of mitral annuloplasty ring design has progressed from qualitative surgical intuition to experimental and theoretical analysis of annular geometry with quantitative imaging techniques. In this work, we present an automated three-dimensional (3D) echocardiographic image analysis method that can be used to statistically assess variability in normal mitral annular geometry to support advancement in annuloplasty ring design. Three-dimensional patient-specific models of the mitral annulus were automatically generated from 3D echocardiographic images acquired from subjects with normal mitral valve structure and function. Geometric annular measurements including annular circumference, annular height, septolateral diameter, intercommissural width, and the annular height to intercommissural width ratio were automatically calculated. A mean 3D annular contour was computed, and principal component analysis was used to evaluate variability in normal annular shape. The following mean ± standard deviations were obtained from 3D echocardiographic image analysis: annular circumference, 107.0 ± 14.6 mm; annular height, 7.6 ± 2.8 mm; septolateral diameter, 28.5 ± 3.7 mm; intercommissural width, 33.0 ± 5.3 mm; and annular height to intercommissural width ratio, 22.7% ± 6.9%. Principal component analysis indicated that shape variability was primarily related to overall annular size, with more subtle variation in the skewness and height of the anterior annular peak, independent of annular diameter. Patient-specific 3D echocardiographic-based modeling of the human mitral valve enables statistical analysis of physiologically normal mitral annular geometry. The tool can potentially lead to the development of a new generation of annuloplasty rings that restore the diseased mitral valve annulus back to a truly normal geometry. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Ushenko, Yu. A.; Wanchuliak, O. Y.
2013-06-01
The optical model of polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth at points of laser images of myocardium histological sections is suggested. The results of investigating the interrelation between statistical parameters (statistical moments of the 1st to 4th order) that characterize the distributions of wavelet coefficients of polarization maps of myocardium layers and the cause of death are presented.
Common lines modeling for reference free Ab-initio reconstruction in cryo-EM.
Greenberg, Ido; Shkolnisky, Yoel
2017-11-01
We consider the problem of estimating an unbiased and reference-free ab initio model for non-symmetric molecules from images generated by single-particle cryo-electron microscopy. The proposed algorithm finds the globally optimal assignment of orientations that simultaneously respects all common lines between all images. The contribution of each common line to the estimated orientations is weighted according to a statistical model for common lines' detection errors. The key property of the proposed algorithm is that it finds the global optimum for the orientations given the common lines. In particular, any local optima in the common lines energy landscape do not affect the proposed algorithm. As a result, it is applicable to thousands of images at once, very robust to noise, completely reference free, and not biased towards any initial model. A byproduct of the algorithm is a set of measures that allow one to assess the reliability of the obtained ab initio model. We demonstrate the algorithm using class averages from two experimental data sets, resulting in ab initio models with resolutions of 20 Å or better, even from class averages consisting of as few as three raw images per class. Copyright © 2017 Elsevier Inc. All rights reserved.
Thygesen, Jesper; Gerke, Oke; Egstrup, Kenneth; Waaler, Dag; Lambrechtsen, Jess
2016-01-01
Background Coronary computed tomography angiography (CCTA) requires high spatial and temporal resolution, increased low contrast resolution for the assessment of coronary artery stenosis, plaque detection, and/or non-coronary pathology. Therefore, new reconstruction algorithms, particularly iterative reconstruction (IR) techniques, have been developed in an attempt to improve image quality with no cost in radiation exposure. Purpose To evaluate whether adaptive statistical iterative reconstruction (ASIR) enhances perceived image quality in CCTA compared to filtered back projection (FBP). Material and Methods Thirty patients underwent CCTA due to suspected coronary artery disease. Images were reconstructed using FBP, 30% ASIR, and 60% ASIR. Ninety image sets were evaluated by five observers using the subjective visual grading analysis (VGA) and assessed by proportional odds modeling. Objective quality assessment (contrast, noise, and the contrast-to-noise ratio [CNR]) was analyzed with linear mixed effects modeling on log-transformed data. The need for ethical approval was waived by the local ethics committee as the study only involved anonymously collected clinical data. Results VGA showed significant improvements in sharpness by comparing FBP with ASIR, resulting in odds ratios of 1.54 for 30% ASIR and 1.89 for 60% ASIR (P = 0.004). The objective measures showed significant differences between FBP and 60% ASIR (P < 0.0001) for noise, with an estimated ratio of 0.82, and for CNR, with an estimated ratio of 1.26. Conclusion ASIR improved the subjective image quality of parameter sharpness and, objectively, reduced noise and increased CNR. PMID:28405477
Sparsity-based Poisson denoising with dictionary learning.
Giryes, Raja; Elad, Michael
2014-12-01
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive independent identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results in cases of low SNR.
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the embedding process, the image content makes a more significant impact on the differences of image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable owing to the differences in image statistical characteristics. In this paper, a new steganalysis framework which can reduce the differences of image statistical characteristics caused by various contents and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are separately extracted from each subset with the same or close texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a controlled computation time.
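For context, a bare MLEM update loop is shown below with a placeholder stopping monitor; the system matrix A, the sinogram y, and the convergence check are illustrative assumptions and do not reproduce the authors' heuristic statistical criterion.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """A: (n_bins, n_voxels) system matrix; y: (n_bins,) measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                     # forward projection
        x = x / sens * (A.T @ (y / proj))      # multiplicative MLEM update
        # a stopping rule would inspect a statistic of the data/model ratio here;
        # this crude check (mean ratio close to 1) is a stand-in only
        if abs((y / proj).mean() - 1.0) < 1e-4:
            break
    return x
```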
Context dependent anti-aliasing image reconstruction
NASA Technical Reports Server (NTRS)
Beaudet, Paul R.; Hunt, A.; Arlia, N.
1989-01-01
Image reconstruction has been mostly confined to context-free linear processes; the traditional continuum interpretation of digital array data uses a linear interpolator with or without an enhancement filter. Here, anti-aliasing context-dependent interpretation techniques are investigated for image reconstruction. Pattern classification is applied to each neighborhood to assign it a context class; a different interpolation/filter is applied to neighborhoods of differing context. It is shown how the context-dependent interpolation is computed through ensemble average statistics using high-resolution training imagery from which the lower resolution image array data is obtained (simulation). A quadratic least squares (LS) context-free image quality model is described from which the context-dependent interpolation coefficients are derived. It is shown how ensembles of high-resolution images can be used to capture the a priori special character of different context classes. As a consequence, a priori information such as the translational invariance of edges along the edge direction, edge discontinuity, and the character of corners is captured and can be used to interpret image array data with greater spatial resolution than would be expected from the Nyquist limit. A Gibbs-like artifact associated with this super-resolution is discussed. More realistic context-dependent image quality models are needed, and a suggestion is made for using a quality model which is now finding application in data compression.
NASA Astrophysics Data System (ADS)
Shao, Yang
This research focuses on the application of remote sensing, geographic information systems, statistical modeling, and spatial analysis to examine the dynamics of urban land cover, urban structure, and population-environment interactions in Bangkok, Thailand, with an emphasis on rural-to-urban migration from rural Nang Rong District, Northeast Thailand to the primate city of Bangkok. The dissertation consists of four main sections: (1) development of remote sensing image classification and change-detection methods for characterizing imperviousness for Bangkok, Thailand from 1993-2002; (2) development of 3-D urban mapping methods, using high spatial resolution IKONOS satellite images, to assess high-rises and other urban structures; (3) assessment of urban spatial structure from 2-D and 3-D perspectives; and (4) an analysis of the spatial clustering of migrants from Nang Rong District in Bangkok and the neighborhood environments of migrants' locations. Techniques are developed to improve the accuracy of the neural network classification approach for the analysis of remote sensing data, with an emphasis on the spectral unmixing problem. The 3-D building heights are derived using the shadow information on the high-resolution IKONOS image. The results from the 2-D and 3-D mapping are further examined to assess urban structure and urban feature identification. This research contributes to image processing of remotely-sensed images and urban studies. The rural-urban migration process and migrants' settlement patterns are examined using spatial statistics, GIS, and remote sensing perspectives. The results show that migrants' spatial clustering in urban space is associated with the source village and a number of socio-demographic variables. In addition, the migrants' neighborhood environments in urban setting are modeled using a set of geographic and socio-demographic variables, and the results are scale-dependent.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical-based approach. In contrast to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
G0-WISHART Distribution Based Classification from Polarimetric SAR Images
NASA Astrophysics Data System (ADS)
Hu, G. C.; Zhao, Q. H.
2017-09-01
Enormous scientific and technical developments have been carried out to further improve remote sensing over recent decades, particularly the polarimetric synthetic aperture radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a flexible model that describes homogeneous, heterogeneous, and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with few parameters. To prove its feasibility, a classification process has been tested on full-polarization synthetic aperture radar (SAR) images using this method. First, multilook polarimetric SAR data processing and speckle filtering are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition. The ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
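For orientation, the classical Wishart distance between a pixel's sample coherency matrix and a class center is sketched below; this shows only the standard Wishart distance used as a baseline in such classifiers, not the G0-Wishart model or the ICM step of the paper, and the variable names are assumptions.

```python
import numpy as np

def wishart_distance(C, V):
    """d(C, V) = ln|V| + Tr(V^{-1} C) for Hermitian positive-definite matrices."""
    sign, logdet = np.linalg.slogdet(V)
    return logdet.real + np.trace(np.linalg.solve(V, C)).real

def classify_pixel(C, class_centers):
    """Assign the pixel coherency matrix C to the nearest class center."""
    dists = [wishart_distance(C, V) for V in class_centers]
    return int(np.argmin(dists))
```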
Bayesian Regularization for Normal Mixture Estimation and Model-Based Clustering
2005-08-04
describe a four-band magnetic resonance image (MRI) consisting of 23,712 pixels of a brain with a tumor. Because of the size of the dataset, it is not...
NASA Astrophysics Data System (ADS)
Liu, Shengnan; Eggermont, Jeroen; Wolterbeek, Ron; Broersen, Alexander; Busk, Carol A. G. R.; Precht, Helle; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke
2016-12-01
Intravascular optical coherence tomography (IVOCT) is an imaging technique that is used to analyze the underlying cause of cardiovascular disease. Because a catheter is used during imaging, the intensities can be affected by the catheter position. This work aims to analyze the effect of the catheter position on IVOCT image intensities and to propose a compensation method to minimize this effect in order to improve the visualization and the automatic analysis of IVOCT images. The effect of catheter position is modeled with respect to the distance between the catheter and the arterial wall (distance-dependent factor) and the incident angle onto the arterial wall (angle-dependent factor). A light transmission model incorporating both factors is introduced. On the basis of this model, the interaction effect of both factors is estimated with a hierarchical multivariate linear regression model. Statistical analysis shows that IVOCT intensities are significantly affected by both factors with p<0.001: as either factor increases, the intensity decreases. This effect differs for different pullbacks. The regression results were used to compensate for this effect. Experiments show that the proposed compensation method can improve the performance of automatic bioresorbable vascular scaffold strut detection.
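A simple stand-in for the regression-based compensation described above: fit log-intensity against the catheter-to-wall distance and incidence angle (with an interaction term), then subtract the fitted trend. The linear-in-log form and the variable names are assumptions for illustration, not the authors' exact hierarchical model.

```python
import numpy as np

def _design(distance, angle):
    return np.column_stack([np.ones_like(distance), distance, angle, distance * angle])

def fit_compensation(log_intensity, distance, angle):
    coef, *_ = np.linalg.lstsq(_design(distance, angle), log_intensity, rcond=None)
    return coef

def compensate(log_intensity, distance, angle, coef):
    trend = _design(distance, angle) @ coef
    return log_intensity - (trend - coef[0])   # remove distance/angle trend, keep baseline
```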
NASA Astrophysics Data System (ADS)
Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien
2016-04-01
The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.
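A hedged sketch of the cross-modality regression idea using scikit-learn's PLSRegression as a stand-in for the computationally tractable PLSR formulation mentioned above; X would hold per-subject MRI (atrophy) features and Y the FDG-PET (hypometabolism) features. The random arrays are placeholders, not ADNI data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))          # placeholder MRI features (subjects x regions)
Y = rng.normal(size=(200, 500))          # placeholder FDG-PET features

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=10)
pls.fit(X_tr, Y_tr)
print("held-out R^2:", pls.score(X_te, Y_te))   # predictive accuracy on unseen subjects
```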
Sadat, Umar; Howarth, Simon P S; Usman, Ammara; Tang, Tjun Y; Graves, Martin J; Gillard, Jonathan H
2013-11-01
Inflammation within atheromatous plaques is a known risk factor for plaque vulnerability. This can be detected in vivo on high-resolution magnetic resonance imaging (MRI) using ultrasmall superparamagnetic iron oxide (USPIO) contrast medium. The purpose of this study was to assess the feasibility of performing sequential USPIO studies over a 1-year period. Ten patients with moderate asymptomatic carotid stenosis underwent carotid MR imaging both before and 36 hours after USPIO infusion at 0, 6, and 12 months. Images were manually segmented into quadrants, and the signal change per quadrant was calculated at these time points. A mixed repeated-measures statistical model was used to determine signal change attributable to USPIO uptake over time. All patients remained asymptomatic during the study. The mixed model revealed no statistical difference in USPIO uptake between the 3 time points. Intraclass correlation coefficients revealed good agreement of quadrant signal pre-USPIO infusion between 0 and 6 months (0.70) and 0 and 12 months (0.70). Good agreement of quadrant signal after USPIO infusion was shown between 0 and 6 months (0.68), and moderate agreement was shown between 0 and 12 months (0.33). USPIO-enhanced sequential MRI of atheromatous carotid plaques is clinically feasible. This may have important implications for future longitudinal studies involving pharmacologic intervention in large patient cohorts. Copyright © 2013 National Stroke Association. Published by Elsevier Inc. All rights reserved.
Simulation of laser beam reflection at the sea surface
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2011-05-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable both for the calculation of images of a SWIR (short wave infrared) imaging sensor and for determination of the total detected power of reflected laser light for a bistatic configuration of laser source and receiver at different atmospheric conditions. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. The propagation model for water waves is applied for sea surface animation. To predict the view of a camera in the SWIR spectral band, the sea surface radiance must be calculated. This is done by considering the emitted sea surface radiance and the reflected sky radiance, calculated by MODTRAN. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled in the SWIR band considering an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). This BRDF model considers the slope statistics of waves and accounts for slope-shadowing of waves that especially occurs at flat incident angles of the laser beam and near-horizontal detection angles of reflected irradiance at rough seas. Simulation results are presented showing the variation of the detected laser power dependent on the geometric configuration of laser, sensor, and wind characteristics.
Wang, Yunlong; Ji, Jun; Jiang, Changsong; Huang, Zengyue
2015-04-01
This study aimed to use the modulation transfer function (MTF) to compare image quality among three different Olympus medical rigid cystoscopes in an in vitro model. In the experiments, we first used three different types of cystoscopes (i.e., an OLYMPUS cystourethroscope with a FOV of 12 degrees, an OLYMPUS Germany A22003A, and an OLYMPUS A2013A) to collect raster images at different brightness levels, with an industrial camera and computer, from resolution targets of different spatial frequencies. We then processed the collected images in MATLAB using the optical transfer function to obtain MTF values at different brightness levels and spatial frequencies, performed statistical analysis of the data, and compared imaging quality. The statistical data showed that all three MTF values were smaller than 1. At a given brightness, MTF values decreased toward 0 as the spatial frequency increased. At a given spatial frequency, MTF values increased slowly as the brightness increased. The MTF values of the three endoscopes were clearly different; in some cases the difference was large, with a maximum of 0.7. It can be concluded from the analysis of the experimental data that the three Olympus medical rigid cystoscopes have distinctly different imaging quality. The No. 3 endoscope (OLYMPUS A2013A) has low resolution but high contrast. The No. 1 endoscope (the OLYMPUS cystourethroscope with a FOV of 12 degrees), on the contrary, has high resolution and lower contrast. The No. 2 endoscope (OLYMPUS Germany A22003A) has high contrast and high resolution, and its image quality was the best.
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Modeling the Effects of Solar Cell Distribution on Optical Cross Section for Solar Panel Simulation
2012-09-01
cell material. The solar panel was created as a CAD model and simulated with the imaging facility parameters in TASAT. TASAT uses a BRDF to apply... model of a solar panel with various solar cell tip and tilt distribution statistics. Modeling a solar panel as a single sheet of “solar cell” material
Temporal Variability and Statistics of the Strehl Ratio in Adaptive-Optics Images
2010-01-01
with the appropriate models and the residuals were extracted. This was done using the ARIMA modelling (Box & Jenkins 1970). ARIMA stands for...It was used here for the opposite goal – to obtain the values of the i.i.d. “noise” and test its distribution. Mixed ARIMA models of order 2 were...often sufficient to ensure non-significant autocorrelation of the residuals. Table 2 lists the stationary sequences with their respective ARIMA models
Investigation of Local Ordering in Amorphous Materials.
NASA Astrophysics Data System (ADS)
Fan, Gary Guoyou
The intent of the research described in this dissertation, as indicated by the title, is to provide a better understanding of the structure of amorphous materials. The possibility of using electron microscopy to study amorphous structure is investigated. Chapter 1 gives a brief introduction to the understanding and modeling of amorphous structure, electron microscopy, and image analysis in general. The difficulty of using 2-D images to infer 3-D structural information is illustrated in Chapter 2, where it is shown that some high resolution images are not qualitatively different from images of white-noise weak-phase objects or those of random atomic arrangements. The means of obtaining statistical information from these images are given in Chapters 3 and 5, where the quantitative differences between experimental images and simulated white-noise images or simulated images corresponding to random arrangements are revealed. The use of image processing techniques in electron microscopy and the possible artifacts are presented in Chapter 4. The pattern recognition technique outlined in Chapter 6 demonstrates a feasible mode of scanning transmission electron microscope operation. Statistical analysis can be effectively performed on a large number of nano-diffraction patterns from, for example, locally ordered samples. Some recent developments in physics as well as in electron microscopy are briefly reviewed, and their possible applications in the study of amorphous structures are discussed in Chapter 7.
Model observer design for multi-signal detection in the presence of anatomical noise
NASA Astrophysics Data System (ADS)
Wen, Gezheng; Markey, Mia K.; Park, Subok
2017-02-01
As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Model observers are typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g. multifocal multicentric (MFMC) breast cancer), which can impact treatment planning. Prevalence of signals can be different across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g. digital breast tomosynthesis may be more effective for diagnosis of MFMC than mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model observer to detect multiple signals in an image dataset. A novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. The PLS channels are adaptive to the characteristics of signals and the background, and they capture the interactions among signal locations. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our results show that: (1) the model observer can achieve high performance with a reasonably small number of channels; (2) the model observer with PLS channels outperforms that with benchmark modified Laguerre-Gauss channels, especially when realistic signal shapes and complex background statistics are involved; (3) the tasks of clinical interest, and other constraints such as sample size would alter the optimal design of the model observer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, S; Liu, X; Loyer, E
Purpose: This work sought to quantify a radiology team's assessment of image quality differences between two CT scanner models currently in clinical use, with emphasis on noise and low-contrast detectability (LCD). Methods: A water phantom and a Kagaku anthropomorphic body phantom were scanned on GE Discovery CT750 HD and LightSpeed VCT scanners (4 each) with identical scan parameters and reconstructed to 2.5mm/5.0mm thicknesses. Images of the water phantom were analyzed at the scanner console with a built-in LCD tool that uses statistical methods to compute the requisite CT-number contrast for 95% confidence in detection of a user-defined object size. LCD value was computed for 5mm, 3mm, and 1mm objects. Analysis of standard deviation and LCD values was performed on Kagaku phantom images within liver, stomach, and spleen. LCD value was computed for 4mm, 3mm, and 1mm objects using a benchmarked MATLAB implementation of the GE scanner-console tool. Results: Water LCD values were larger (poorer performance) for all HD scanners compared to VCT scanners. Mean scanner model difference in requisite CT-number contrast for 5mm, 3mm, and 1mm objects for 5.0mm/2.5mm images was 3.0%/3.4% (p=0.02/p=0.10), 5.3%/5.7% (0.00002/0.02), and 8.5%/8.2% (0.0004/0.002), respectively. Mean standard deviations within Kagaku phantom ROIs were greater in HD compared to VCT images, with mean differences for the liver, stomach, and spleen for 5.0mm/2.5mm of 16%/12% (p=0.04/0.10), 8%/12% (0.15/0.11), and 16%/15% (0.05/0.11), respectively. Mean LCD value difference between HD and VCT scanners over all ROIs for 4mm, 3mm, and 1mm objects and 5.0mm/2.5mm was 34%/9%, 16%/8%, and 18%/10%, respectively. HD scanners outperformed VCT scanners only for the 4mm stomach object. Conclusion: Using both water and anthropomorphic phantoms, it was shown that HD scanners are outperformed by VCT scanners with respect to noise and LCD in a consistent and in most cases statistically significant manner. The relationship between statistical and clinical significance demands further work.
Sweeney, Elizabeth M; Shinohara, Russell T; Shiee, Navid; Mateen, Farrah J; Chudgar, Avni A; Cuzzocreo, Jennifer L; Calabresi, Peter A; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M
2013-01-01
Magnetic resonance imaging (MRI) can be used to detect lesions in the brains of multiple sclerosis (MS) patients and is essential for diagnosing the disease and monitoring its progression. In practice, lesion load is often quantified by either manual or semi-automated segmentation of MRI, which is time-consuming, costly, and associated with large inter- and intra-observer variability. We propose OASIS is Automated Statistical Inference for Segmentation (OASIS), an automated statistical method for segmenting MS lesions in MRI studies. We use logistic regression models incorporating multiple MRI modalities to estimate voxel-level probabilities of lesion presence. Intensity-normalized T1-weighted, T2-weighted, fluid-attenuated inversion recovery and proton density volumes from 131 MRI studies (98 MS subjects, 33 healthy subjects) with manual lesion segmentations were used to train and validate our model. Within this set, OASIS detected lesions with a partial area under the receiver operating characteristic curve for clinically relevant false positive rates of 1% and below of 0.59% (95% CI; [0.50%, 0.67%]) at the voxel level. An experienced MS neuroradiologist compared these segmentations to those produced by LesionTOADS, an image segmentation software that provides segmentation of both lesions and normal brain structures. For lesions, OASIS out-performed LesionTOADS in 74% (95% CI: [65%, 82%]) of cases for the 98 MS subjects. To further validate the method, we applied OASIS to 169 MRI studies acquired at a separate center. The neuroradiologist again compared the OASIS segmentations to those from LesionTOADS. For lesions, OASIS ranked higher than LesionTOADS in 77% (95% CI: [71%, 83%]) of cases. For a randomly selected subset of 50 of these studies, one additional radiologist and one neurologist also scored the images. Within this set, the neuroradiologist ranked OASIS higher than LesionTOADS in 76% (95% CI: [64%, 88%]) of cases, the neurologist 66% (95% CI: [52%, 78%]) and the radiologist 52% (95% CI: [38%, 66%]). OASIS obtains the estimated probability for each voxel to be part of a lesion by weighting each imaging modality with coefficient weights. These coefficients are explicit, obtained using standard model fitting techniques, and can be reused in other imaging studies. This fully automated method allows sensitive and specific detection of lesion presence and may be rapidly applied to large collections of images.
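A sketch of the core idea described above, a voxel-level logistic regression on multimodal intensities, using scikit-learn as a stand-in for the authors' statistical software. The feature construction (smoothed intensities, interactions) used by OASIS is simplified away; the arrays are assumed to be intensity-normalized and brain-masked.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_lesion_model(t1, t2, flair, pd, labels):
    """Each modality argument: (n_voxels,) intensity vector; labels: 0/1 lesion mask."""
    X = np.column_stack([t1, t2, flair, pd])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, labels)
    return model                                   # explicit, reusable coefficients

def lesion_probability(model, t1, t2, flair, pd):
    X = np.column_stack([t1, t2, flair, pd])
    return model.predict_proba(X)[:, 1]            # voxel-wise probability of lesion
```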
Sojoudi, Alireza; Goodyear, Bradley G
2016-12-01
Spontaneous fluctuations of blood-oxygenation level-dependent functional magnetic resonance imaging (BOLD fMRI) signals are highly synchronous between brain regions that serve similar functions. This provides a means to investigate functional networks; however, most analysis techniques assume functional connections are constant over time. This may be problematic in the case of neurological disease, where functional connections may be highly variable. Recently, several methods have been proposed to determine moment-to-moment changes in the strength of functional connections over an imaging session (so-called dynamic connectivity). Here a novel analysis framework based on a hierarchical observation modeling approach is proposed to permit statistical inference of the presence of dynamic connectivity. A two-level linear model composed of overlapping sliding windows of fMRI signals, incorporating the fact that overlapping windows are not independent, is described. To test this approach, datasets were synthesized whereby functional connectivity was either constant (significant or insignificant) or modulated by an external input. The method successfully determines the statistical significance of a functional connection in phase with the modulation, and it exhibits greater sensitivity and specificity in detecting regions with variable connectivity when compared with sliding-window correlation analysis. For real data, this technique possesses greater reproducibility and provides a more discriminative estimate of dynamic connectivity than sliding-window correlation analysis. Hum Brain Mapp 37:4566-4580, 2016. © 2016 Wiley Periodicals, Inc.
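For reference, the baseline method against which the framework is compared, sliding-window correlation, is sketched below; this is the comparison method only, not the proposed hierarchical observation model, and the window length and stride are illustrative.

```python
import numpy as np

def sliding_window_correlation(x, y, window=30, stride=1):
    """x, y: (n_timepoints,) BOLD signals from two regions."""
    corrs = []
    for start in range(0, len(x) - window + 1, stride):
        xi, yi = x[start:start + window], y[start:start + window]
        corrs.append(np.corrcoef(xi, yi)[0, 1])
    return np.array(corrs)    # one dynamic-connectivity estimate per window position
```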
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
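A hedged sketch of the counting idea: fit Gaussian mixtures with an increasing number of components to the scattered column intensities, select the model with an information criterion, and assign each column the component (ordered by intensity) with highest posterior probability. Here scikit-learn's GaussianMixture and BIC stand in for the paper's estimation and model-selection machinery, and the mapping from component rank to atom count is an assumption (counts are determined only up to an offset in this sketch).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def count_atoms(column_intensities, max_components=30, random_state=0):
    X = np.asarray(column_intensities, dtype=float).reshape(-1, 1)
    models = [GaussianMixture(n, random_state=random_state).fit(X)
              for n in range(1, max_components + 1)]
    best = min(models, key=lambda m: m.bic(X))       # pick component count by BIC
    order = np.argsort(best.means_.ravel())          # sort components by mean intensity
    rank = np.empty_like(order)
    rank[order] = np.arange(len(order))
    return rank[best.predict(X)] + 1                  # brighter columns -> larger count
```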
Urani, C; Corvi, R; Callegaro, G; Stefanini, F M
2013-09-01
In vitro cell transformation assays (CTAs) have been shown to model important stages of in vivo carcinogenesis and have the potential to predict carcinogenicity in humans. Advantages of CTAs are their ability of revealing both genotoxic and non-genotoxic carcinogens while reducing both experimental costs and the number of animals used. The endpoint of the CTA is foci formation, and requires classification under light microscopy based on morphology. Thus current limitations for the wide adoption of the assay partially depend on a fair degree of subjectivity in foci scoring. An objective evaluation may be obtained after separating foci from background monolayer in the digital image, and quantifying values of statistical descriptors which are selected to capture eye-scored morphological features. The aim of this study was to develop statistical descriptors to be applied to transformed foci of BALB/c 3T3, which cover foci size, multilayering and invasive cell growth into the background monolayer. Proposed descriptors were applied to a database of 407 foci images to explore the numerical features, and to illustrate open problems and potential solutions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Staging Liver Fibrosis with Statistical Observers
NASA Astrophysics Data System (ADS)
Brand, Jonathan Frieman
Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. Pathology diagnosis of HF is based on textural change in the liver as a lobular collagen network develops within portal triads. The scale of the collagen lobules is characteristically on the order of 1 mm, which is close to the resolution limit of in vivo Gd-enhanced MRI. In this work the methods to collect training and testing images for a Hotelling observer are covered. An observer based on local texture analysis is trained and tested using wet-tissue phantoms. The technique is used to optimize the MRI sequence based on task performance. The final method developed is a two-stage model observer to classify fibrotic and healthy tissue in both phantoms and in vivo MRI images. The first-stage observer tests for the presence of local texture. Test statistics from the first observer are used to train the second-stage observer to globally sample the local observer results. A decision on the disease class is made for an entire MRI image slice using test statistics collected from the second observer. The techniques are tested on wet-tissue phantoms and in vivo clinical patient data.
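As background for the observers described above, the textbook Hotelling test statistic for a known signal is shown below: the template is the signal prewhitened by the covariance estimated from training images, and the statistic is its inner product with the test image vector. This is the generic form, not the authors' local-texture channels.

```python
import numpy as np

def hotelling_template(signal, covariance):
    """signal: (n,) expected signal; covariance: (n, n) estimated from training images."""
    return np.linalg.solve(covariance, signal)      # w = K^{-1} s

def hotelling_statistic(template, image_vector):
    return float(template @ image_vector)           # t = w^T g, thresholded to decide class
```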
Li, Xingyu; Plataniotis, Konstantinos N
2015-07-01
In digital histopathology, tasks of segmentation and disease diagnosis are achieved by quantitative analysis of image content. However, color variation in image samples makes it challenging to produce reliable results. This paper introduces a complete normalization scheme to address the problem of color variation in histopathology images jointly caused by inconsistent biopsy staining and nonstandard imaging conditions. Method: Different from existing normalization methods that either address a partial cause of color variation or lump the causes together, our method identifies causes of color variation based on a microscopic imaging model and addresses inconsistency in biopsy imaging and staining by an illuminant normalization module and a spectral normalization module, respectively. In evaluation, we use two public datasets that are representative of histopathology images commonly received in clinics to examine the proposed method from the aspects of robustness to system settings, performance consistency against achromatic pixels, and normalization effectiveness in terms of histological information preservation. As the saturation-weighted statistics proposed in this study generate stable and reliable color cues for stain normalization, our scheme is robust to system parameters and insensitive to image content and achromatic colors. Extensive experimentation suggests that our approach outperforms state-of-the-art normalization methods, as the proposed method is the only approach that succeeds in preserving histological information after normalization. The proposed color normalization solution should be useful for mitigating the effects of color variation in pathology images on subsequent quantitative analysis.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
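For context, a compact Richardson-Lucy loop, the classical maximum-likelihood deconvolution under a Poisson noise model, is given below purely to illustrate the statistical principle the multi-frame algorithm builds on; the paper's frame selection, PSF estimation model, and regularization are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """observed: 2-D image; psf: 2-D point spread function (sums to 1)."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur operator
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        ratio = observed / blurred                   # data / model, Poisson likelihood term
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate
```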
Identification of Water Bodies in a Landsat 8 OLI Image Using a J48 Decision Tree.
Acharya, Tri Dev; Lee, Dong Ha; Yang, In Tae; Lee, Jae Kang
2016-07-12
Water bodies are essential to humans and other forms of life. Identification of water bodies can be useful in various ways, including estimation of water availability, demarcation of flooded regions, change detection, and so on. In past decades, Landsat satellite sensors have been used for land use classification and water body identification. Due to the introduction of the new Operational Land Imager (OLI) sensor on Landsat 8 with a high spectral resolution and improved signal-to-noise ratio, the quality of imagery sensed by Landsat 8 has improved, enabling better characterization of land cover and increased data size. Therefore, it is necessary to explore the most appropriate and practical water identification methods that take advantage of the improved image quality and use the fewest inputs based on the original OLI bands. The objective of the study is to explore the potential of a J48 decision tree (JDT) in identifying water bodies using reflectance bands from Landsat 8 OLI imagery. J48 is an open-source decision tree. The test site for the study is in the Northern Han River Basin, which is located in Gangwon province, Korea. Training data with individual bands were used to develop the JDT model and later applied to the whole study area. The performance of the model was statistically analysed using the kappa statistic and area under the curve (AUC). The results were compared with five other known water identification methods using a confusion matrix and related statistics. Almost all the methods showed high accuracy, and the JDT was successfully applied to the OLI image using only four bands, where the new additional deep blue band of OLI was found to have the third highest information gain. Thus, the JDT can be a good method for water body identification based on images with improved resolution and increased size.
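A minimal sketch of band-based decision-tree water mapping, with scikit-learn's CART tree standing in for the Weka J48 (C4.5) implementation named in the study. The array shapes, tree depth, and evaluation split are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score

def train_water_classifier(bands, labels, max_depth=6, random_state=0):
    """bands: (n_pixels, n_bands) OLI reflectances; labels: 0 (non-water) / 1 (water)."""
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=random_state)
    tree.fit(bands, labels)
    return tree

def evaluate(tree, bands, labels):
    pred = tree.predict(bands)
    return cohen_kappa_score(labels, pred)     # kappa statistic, as used in the study
```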
Cellular neural network-based hybrid approach toward automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar
2013-01-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network approach-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
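The full framework combines several components; the hedged sketch below shows only the conventional SIFT feature-matching and robust affine-estimation step that such a registration pipeline starts from (using OpenCV; file names are placeholders), not the CNN-based optimization, coreset reduction, or cellular-automata resampling described in the paper.

    # SIFT matching with a ratio test, then RANSAC affine estimation (OpenCV).
    import cv2
    import numpy as np

    ref = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)
    mov = cv2.imread("moving.tif", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(mov, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    registered = cv2.warpAffine(mov, A, (ref.shape[1], ref.shape[0]))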
NASA Astrophysics Data System (ADS)
Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli
2016-10-01
The chi-squared transform (CST) is a statistical method that describes the degree of difference between vectors. CST-based methods operate directly on the information stored in the difference image and are simple and effective for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to considerable noise in the change detection result. An improved unsupervised change detection method is proposed based on a spatial-constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of the bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter of the SCCST method, the confidence level, a pseudotraining dataset is constructed to estimate its optimal value. Then, the result of SCCST, used as an initial solution for change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal and multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
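For reference, the basic (non-spatial) chi-squared transform can be sketched as below: each pixel of the multispectral difference image is scored with a Mahalanobis-style statistic and thresholded at a chi-squared quantile for a chosen confidence level. The iterative trimming, spatial constraint, and MRF refinement of the proposed SCCST method are not reproduced here.

    # Basic chi-squared transform change map for co-registered multispectral images.
    import numpy as np
    from scipy.stats import chi2

    def cst_change_map(img_t1, img_t2, confidence=0.99):
        bands = img_t1.shape[2]
        diff = (img_t2.astype(float) - img_t1.astype(float)).reshape(-1, bands)
        mu = diff.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(diff, rowvar=False))
        centered = diff - mu
        y = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)  # per-pixel statistic
        threshold = chi2.ppf(confidence, df=bands)                 # chi-squared quantile
        return (y > threshold).reshape(img_t1.shape[:2])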
Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins
2014-11-01
Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
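The Dahlberg formula used in the statistical comparison is simple enough to state directly: for n pairs of repeated measurements, d = sqrt(sum((x1 - x2)^2) / (2n)). A minimal sketch with made-up values is given below.

    # Dahlberg error between two series of repeated measurements.
    import numpy as np

    def dahlberg_error(first, second):
        d = np.asarray(first, float) - np.asarray(second, float)
        return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

    print(dahlberg_error([8.1, 7.9, 6.5], [8.0, 8.0, 6.6]))  # illustrative tooth widths (mm)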
Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre
2012-01-01
Background: Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings: The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods. Conclusions/Significance: The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
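A simplified version of the surrogate idea can be sketched with Fourier amplitude-matched surrogates: white-noise images are given the amplitude spectrum (and hence the autocorrelation) of the observed map, and the correlation statistic is recomputed on each surrogate to build the null distribution. This is only a stand-in for the dual-tree complex wavelet surrogates of the paper, which handle anisotropy and scale-dependent structure better.

    # Monte Carlo association test with amplitude-matched (autocorrelation-preserving) surrogates.
    import numpy as np

    def amplitude_matched_surrogate(image, rng):
        # Give a white-noise field the Fourier amplitude spectrum of `image`.
        noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
        amplitude = np.abs(np.fft.fft2(image - image.mean()))
        return np.fft.ifft2(amplitude * np.exp(1j * noise_phase)).real + image.mean()

    def surrogate_correlation_test(map_a, map_b, n_surrogates=999, seed=0):
        rng = np.random.default_rng(seed)
        observed = np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
        null = [np.corrcoef(amplitude_matched_surrogate(map_a, rng).ravel(), map_b.ravel())[0, 1]
                for _ in range(n_surrogates)]
        p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_surrogates + 1)
        return observed, p_value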
PSF estimation for defocus blurred image based on quantum back-propagation neural network
NASA Astrophysics Data System (ADS)
Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang
2010-11-01
Images obtained by an aberration-free system can still be blurred by defocus due to motion in depth and/or zooming. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic PSF model precisely because of the complexity of the degradation process. Inspired by the similarity between quantum processes and the imaging process in probabilistic and statistical terms, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer; it introduces a 2-bit controlled-NOT quantum gate to control the output and takes two texture and edge features as the input vector. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that the method achieves high precision and strong generalization ability.
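For context, the parametric defocus PSF that such an estimator aims to identify is commonly modelled as a uniform disk whose radius grows with the degree of defocus; the quantum back-propagation network itself is not reproduced in this sketch.

    # Uniform-disk (pillbox) defocus PSF of a given radius, normalized to unit sum.
    import numpy as np

    def defocus_psf(radius, size=None):
        size = size or int(2 * np.ceil(radius) + 1)
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        psf = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return psf / psf.sum()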
Yang, Zhen; Bogovic, John A; Carass, Aaron; Ye, Mao; Searson, Peter C; Prince, Jerry L
2013-03-13
With the rapid development of microscopy for cell imaging, there is a strong and growing demand for image analysis software to quantitatively study cell morphology. Automatic cell segmentation is an important step in image analysis. Despite substantial progress, there is still a need to improve the accuracy, efficiency, and adaptability to different cell morphologies. In this paper, we propose a fully automatic method for segmenting cells in fluorescence images of confluent cell monolayers. This method addresses several challenges through a combination of ideas. 1) It realizes a fully automatic segmentation process by first detecting the cell nuclei as initial seeds and then using a multi-object geometric deformable model (MGDM) for final segmentation. 2) To deal with different defects in the fluorescence images, the cell junctions are enhanced by applying an order-statistic filter and a principal curvature-based image operator. 3) The final segmentation using MGDM promotes robust and accurate segmentation results, and guarantees no overlaps or gaps between neighboring cells. The automatic segmentation results are compared with manually delineated cells, and the average Dice coefficient over all distinguishable cells is 0.88.
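The junction-enhancement step (2) can be sketched as below: an order-statistic filter suppresses noise, and the principal curvature (the Hessian eigenvalue of largest magnitude) highlights the bright junction ridges. The filter choice and parameters are illustrative assumptions, not the authors' exact settings.

    # Order-statistic filtering followed by a principal-curvature ridge measure.
    import numpy as np
    from scipy.ndimage import median_filter
    from skimage.feature import hessian_matrix, hessian_matrix_eigvals

    def enhance_junctions(image, filter_size=3, sigma=2.0):
        smoothed = median_filter(image.astype(float), size=filter_size)
        eigvals = hessian_matrix_eigvals(hessian_matrix(smoothed, sigma=sigma, order="rc"))
        # Bright ridges have a strongly negative smallest eigenvalue across the ridge.
        return np.maximum(-eigvals[-1], 0)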
LORETA imaging of P300 in schizophrenia with individual MRI and 128-channel EEG.
Pae, Ji Soo; Kwon, Jun Soo; Youn, Tak; Park, Hae-Jeong; Kim, Myung Sun; Lee, Boreom; Park, Kwang Suk
2003-11-01
We investigated the characteristics of P300 generators in schizophrenics by using voxel-based statistical parametric mapping of current density images. P300 generators, produced by a rare target tone of 1500 Hz (15%) presented among frequent nontarget tones of 1000 Hz (85%), were measured in 20 right-handed schizophrenics and 21 controls. Low-resolution electromagnetic tomography (LORETA), using a realistic boundary element head model based on individual MRI, was applied to the 128-channel EEG. Three-dimensional current density images were reconstructed from the LORETA intensity maps covering the whole cortical gray matter. Spatial normalization and intensity normalization of the smoothed current density images were used to reduce anatomical variance and subject-specific global activity, and statistical parametric mapping (SPM) was applied for the statistical analysis. We found that the sources of P300 were consistently localized at the left superior parietal area in normal subjects, while those of schizophrenics were diversely distributed. Upon statistical comparison, schizophrenics, with globally reduced current densities, showed a significant P300 current density reduction in the left medial temporal area and in the left inferior parietal area, while both left prefrontal and right orbitofrontal areas were relatively activated. Current density in the left parietotemporal area correlated negatively with Positive and Negative Syndrome Scale total scores in the schizophrenic patients. In conclusion, the reduced and increased areas of current density in schizophrenic patients suggest that the medial temporal and frontal areas contribute to the pathophysiology of schizophrenia, in particular a frontotemporal circuitry abnormality.
Thoracic respiratory motion estimation from MRI using a statistical model and a 2-D image navigator.
King, A P; Buerger, C; Tsoumpas, C; Marsden, P K; Schaeffter, T
2012-01-01
Respiratory motion models have potential application for estimating and correcting the effects of motion in a wide range of applications, for example in PET-MR imaging. Given that motion cycles caused by breathing are only approximately repeatable, an important quality of such models is their ability to capture and estimate the intra- and inter-cycle variability of the motion. In this paper we propose and describe a technique for free-form nonrigid respiratory motion correction in the thorax. Our model is based on a principal component analysis of the motion states encountered during different breathing patterns, and is formed from motion estimates made from dynamic 3-D MRI data. We apply our model using a data-driven technique based on a 2-D MRI image navigator. Unlike most previously reported work in the literature, our approach is able to capture both intra- and inter-cycle motion variability. In addition, the 2-D image navigator can be used to estimate how applicable the current motion model is, and hence report when more imaging data is required to update the model. We also use the motion model to decide on the best positioning for the image navigator. We validate our approach using MRI data acquired from 10 volunteers and demonstrate improvements of up to 40.5% over other reported motion modelling approaches, which corresponds to 61% of the overall respiratory motion present. Finally we demonstrate one potential application of our technique: MRI-based motion correction of real-time PET data for simultaneous PET-MRI acquisition. Copyright © 2011 Elsevier B.V. All rights reserved.
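The statistical model can be pictured as a principal component analysis of training displacement fields, with the PCA coefficients linked to a navigator-derived surrogate so that each new 2-D navigator frame yields a full motion estimate. The sketch below assumes flattened displacement fields and a simple least-squares mapping; the paper's formulation and registration pipeline are more involved.

    # PCA motion model driven by navigator features (illustrative shapes and mapping).
    import numpy as np

    def fit_motion_model(training_fields, navigator, n_components=3):
        # training_fields: (n_frames, n_voxels*3); navigator: (n_frames, n_features)
        mean_field = training_fields.mean(axis=0)
        centered = training_fields - mean_field
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        components = Vt[:n_components]                      # principal motion modes
        coeffs = centered @ components.T                    # per-frame mode weights
        design = np.c_[navigator, np.ones(len(navigator))]  # affine regression design matrix
        W, *_ = np.linalg.lstsq(design, coeffs, rcond=None)
        return mean_field, components, W

    def predict_field(mean_field, components, W, nav_features):
        coeffs = np.r_[nav_features, 1.0] @ W
        return mean_field + coeffs @ components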
NASA Technical Reports Server (NTRS)
Hasselmann, Klaus; Hasselmann, Susanne; Bauer, Eva; Bruening, Claus; Lehner, Susanne; Graber, Hans; Lionello, Piero
1988-01-01
The applicability of ERS-1 wind and wave data for wave models was studied using the WAM third generation wave model and SEASAT altimeter, scatterometer and SAR data. A series of global wave hindcasts is made using surface stress and surface wind fields obtained by assimilation of scatterometer data for the full 96-day SEASAT period, and also using two wind field analyses for shorter periods obtained by assimilation with the higher-resolution ECMWF T63 model and by subjective analysis methods. It is found that wave models respond very sensitively to inconsistencies in wind field analyses and therefore provide a valuable data validation tool. Comparisons between SEASAT SAR image spectra and theoretical SAR spectra derived from the hindcast wave spectra by Monte Carlo simulations yield good overall agreement for 32 cases representing a wide variety of wave conditions. It is concluded that SAR wave imaging is sufficiently well understood to apply SAR image spectra with confidence for wave studies if supported by realistic wave models and theoretical computations of the strongly nonlinear mapping of the wave spectrum into the SAR image spectrum. A closed nonlinear integral expression for this spectral mapping relation is derived which avoids the inherent statistical errors of Monte Carlo computations and may prove to be more efficient numerically.
Evaluation of respiratory and cardiac motion correction schemes in dual gated PET/CT cardiac imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamare, F.; Fernandez, P.
Purpose: Cardiac imaging suffers from both respiratory and cardiac motion. One of the proposed solutions involves double gated acquisitions. Although such an approach may lead to both respiratory and cardiac motion compensation, there are issues associated with (a) the combination of data from cardiac and respiratory motion bins, and (b) poor statistical quality images as a result of using only part of the acquired data. The main objective of this work was to evaluate different schemes of combining binned data in order to identify the best strategy to reconstruct motion free cardiac images from dual gated positron emission tomography (PET) acquisitions. Methods: A digital phantom study as well as seven human studies were used in this evaluation. PET data were acquired in list mode (LM). A real-time position management system and an electrocardiogram device were used to provide the respiratory and cardiac motion triggers registered within the LM file. Acquired data were subsequently binned considering four and six cardiac gates, or the diastole only, in combination with eight respiratory amplitude gates. PET images were corrected for attenuation, but no randoms or scatter corrections were applied. Reconstructed images from each of the bins considered above were subsequently used in combination with an affine or an elastic registration algorithm to derive transformation parameters allowing the combination of all acquired data in a particular position in the cardiac and respiratory cycles. Images were assessed in terms of signal-to-noise ratio (SNR), contrast, image profile, coefficient-of-variation (COV), and relative difference of the recovered activity concentration. Results: Regardless of the considered motion compensation strategy, the nonrigid motion model performed better than the affine model, leading to higher SNR and contrast combined with a lower COV. Nevertheless, when compensating for respiration only, no statistically significant differences were observed in the performance of the two motion models considered. Superior image SNR and contrast were seen using the affine respiratory motion model in combination with the diastole cardiac bin in comparison to the use of the whole cardiac cycle. In contrast, when simultaneously correcting for cardiac beating and respiration, the elastic respiratory motion model outperformed the affine model. In this context, four cardiac bins associated with eight respiratory amplitude bins seemed to be adequate. Conclusions: Considering the compensation of respiratory motion effects only, both affine and elastic based approaches led to an accurate resizing and positioning of the myocardium. The use of the diastolic phase combined with an affine model based respiratory motion correction may therefore be a simple approach leading to significant quality improvements in cardiac PET imaging. However, the best performance was obtained with the combined correction for both cardiac and respiratory movements considering all the dual-gated bins independently through the use of an elastic model-based motion compensation.
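The figures of merit quoted above can be computed from simple regions of interest; the sketch below shows one common set of definitions for SNR, contrast, and the coefficient of variation, assuming myocardium and background masks are available (the paper's exact definitions may differ).

    # ROI-based image-quality metrics for a reconstructed cardiac PET volume.
    import numpy as np

    def quality_metrics(image, myocardium_mask, background_mask):
        myo, bkg = image[myocardium_mask], image[background_mask]
        snr = myo.mean() / bkg.std()
        contrast = (myo.mean() - bkg.mean()) / bkg.mean()
        cov = myo.std() / myo.mean()      # coefficient of variation in the myocardium
        return snr, contrast, cov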