Sound imaging of nocturnal animal calls in their natural habitat.
Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G
2011-09-01
We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Firefly") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light-emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conduct two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes Japanese tree frogs' calling behavior, confirming the known behavior that two frogs call either in synchrony or in anti-phase synchronization. The field experiment (in a rice paddy where Japanese tree frogs live) visualizes the same calling behavior, confirming anti-phase synchronization in the field. These results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.
Feasibility of digital imaging to characterize earth materials : part 2.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 6.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 3.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 1.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 4.
DOT National Transportation Integrated Search
2012-06-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Feasibility of digital imaging to characterize earth materials : part 5.
DOT National Transportation Integrated Search
2012-05-06
This study demonstrated the feasibility of digital imaging to characterize earth materials. Two rapid, relatively low cost image-based methods were developed for determining the grain size distribution of soils and aggregates. The first method, calle...
Selective object encryption for privacy protection
NASA Astrophysics Data System (ADS)
Zhou, Yicong; Panetta, Karen; Cherukuri, Ravindranath; Agaian, Sos
2009-05-01
This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
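The truncated variant is the paper's own construction, but the classical Fibonacci p-sequence it builds on follows the standard recurrence F(n) = F(n-1) + F(n-p-1). A minimal sketch, with illustrative seeding and a greedy bit-plane decomposition (the function names and the greedy rule are assumptions, not the paper's algorithm):

```python
def fibonacci_p_sequence(p, length):
    """Standard Fibonacci p-sequence: F(n) = F(n-1) + F(n-p-1),
    seeded with ones. p = 0 reduces to powers of two (ordinary
    binary weights); p = 1 gives the classical Fibonacci numbers."""
    seq = [1] * (p + 1)
    while len(seq) < length:
        seq.append(seq[-1] + seq[-p - 1])
    return seq[:length]

def greedy_decompose(value, weights):
    """Greedily express `value` over `weights` (largest first),
    yielding one bit-plane coefficient per weight. With p = 0
    weights this is exactly ordinary binary decomposition."""
    bits = []
    for w in sorted(weights, reverse=True):
        if value >= w:
            bits.append(1)
            value -= w
        else:
            bits.append(0)
    return bits
```

Decomposing every pixel over such a weight set yields one bit-plane per weight, which is the kind of plane-by-plane representation the encryption algorithm shuffles.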
The development of a super-fine-grained nuclear emulsion
NASA Astrophysics Data System (ADS)
Asada, Takashi; Naka, Tatsuhiro; Kuwabara, Ken-ichi; Yoshimoto, Masahiro
2017-06-01
A nuclear emulsion with micronized crystals is required for the tracking detection of submicron ionizing particles, which are one of the targets of dark-matter detection and other techniques. We found that a new production method, called the PVA—gelatin mixing method (PGMM), could effectively control crystal size from 20 nm to 50 nm. We called the two types of emulsion produced with the new method the nano imaging tracker and the ultra-nano imaging tracker. Their composition and spatial resolution were measured, and the results indicate that these emulsions detect extremely short tracks.
Color Histogram Diffusion for Image Enhancement
NASA Technical Reports Server (NTRS)
Kim, Taemin
2011-01-01
Various color histogram equalization (CHE) methods have been proposed to extend grayscale histogram equalization (GHE) for color images. In this paper a new method called histogram diffusion that extends the GHE method to arbitrary dimensions is proposed. Ranges in a histogram are specified as overlapping bars of uniform heights and variable widths which are proportional to their frequencies. This diagram is called the vistogram. As an alternative approach to GHE, the squared error of the vistogram from the uniform distribution is minimized. Each bar in the vistogram is approximated by a Gaussian function. Gaussian particles in the vistogram diffuse as a nonlinear autonomous system of ordinary differential equations. CHE results of color images showed that the approach is effective.
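For context, the GHE baseline that histogram diffusion generalizes can be sketched in a few lines; the mapping below is the textbook CDF-based form, not the paper's vistogram method:

```python
def equalize_histogram(pixels, levels=256):
    """Textbook grayscale histogram equalization: build the histogram,
    accumulate it into a CDF, and map each level through the
    normalized CDF so the output histogram is roughly uniform."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(pixels)
    lut = [round(c * (levels - 1) / n) for c in cdf]  # old level -> new level
    return [lut[v] for v in pixels]
```

The difficulty the paper addresses is that this per-level lookup table has no natural analogue in several dimensions, which is what motivates reformulating equalization as an energy minimization over the vistogram.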
Rowlands, J A; Hunter, D M; Araj, N
1991-01-01
A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.
An effective method on pornographic images realtime recognition
NASA Astrophysics Data System (ADS)
Wang, Baosong; Lv, Xueqiang; Wang, Tao; Wang, Chengrui
2013-03-01
In this paper, skin detection, texture filtering, and face detection are used to extract features from an image library, and the features are used to train a decision-tree classifier that distinguishes unknown images. In an experiment based on more than twenty thousand images, the precision rate reaches 76.21% when testing on 13,025 pornographic images, with an elapsed time of less than 0.2 s; this suggests the method generalizes well. Among the steps mentioned above, we propose a new skin detection model, called the irregular polygon region skin detection model, based on the YCbCr color space; it lowers the false detection rate of skin detection. A new method called sequence region labeling on binary connected areas calculates features on connected areas; it is faster and needs less memory than recursive methods.
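As a baseline for comparison, a classical rectangular skin gate in the CbCr plane can be sketched as follows. The BT.601 conversion is standard, and the Cb/Cr thresholds are commonly cited literature values; the paper's irregular-polygon model replaces this box with a tighter polygonal region to cut false detections:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classical rectangular skin gate in the CbCr plane: a pixel is
    'skin' if both chroma components fall inside fixed ranges.
    The ranges here are widely used literature values, not the
    polygonal region proposed in the paper."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```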
Contour detection improved by context-adaptive surround suppression.
Sang, Qiang; Cai, Biao; Chen, Hao
2017-01-01
Recently, many image processing applications have taken advantage of a psychophysical and neurophysiological mechanism, called "surround suppression" to extract object contour from a natural scene. However, these traditional methods often adopt a single suppression model and a fixed input parameter called "inhibition level", which needs to be manually specified. To overcome these drawbacks, we propose a novel model, called "context-adaptive surround suppression", which can automatically control the effect of surround suppression according to image local contextual features measured by a surface estimator based on a local linear kernel. Moreover, a dynamic suppression method and its stopping mechanism are introduced to avoid manual intervention. The proposed algorithm is demonstrated and validated by a broad range of experimental results.
Soil structure characterized using computed tomographic images
Zhanqi Cheng; Stephen H. Anderson; Clark J. Gantzer; J. W. Van Sambeek
2003-01-01
Fractal analysis of soil structure is a relatively new method for quantifying the effects of management systems on soil properties and quality. The objective of this work was to explore several methods of studying images to describe and quantify structure of soils under forest management. This research uses computed tomography and a topological method called Multiple...
Qian, Jianjun; Yang, Jian; Xu, Yong
2013-09-01
This paper presents a robust but simple image feature extraction method, called image decomposition based on local structure (IDLS). It is assumed that in the local window of an image, the macro-pixel (patch) of the central pixel, and those of its neighbors, are locally linear. IDLS captures the local structural information by describing the relationship between the central macro-pixel and its neighbors. This relationship is represented with the linear representation coefficients determined using ridge regression. One image is actually decomposed into a series of sub-images (also called structure images) according to a local structure feature vector. All the structure images, after being down-sampled for dimensionality reduction, are concatenated into one super-vector. Fisher linear discriminant analysis is then used to provide a low-dimensional, compact, and discriminative representation for each super-vector. The proposed method is applied to face recognition and examined using our real-world face image database, NUST-RWFR, and five popular, publicly available, benchmark face image databases (AR, Extended Yale B, PIE, FERET, and LFW). Experimental results show the performance advantages of IDLS over state-of-the-art algorithms.
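The local linear representation at the core of IDLS is a ridge regression of the central macro-pixel on its neighbors. A minimal sketch under illustrative assumptions (the function name, patch layout, and regularization weight are not the paper's):

```python
import numpy as np

def local_structure_coefficients(center_patch, neighbor_patches, lam=0.01):
    """Ridge-regression coefficients expressing the central macro-pixel
    (patch) as a linear combination of its neighbors' patches:
    w = (A^T A + lam*I)^{-1} A^T b, with neighbor patches as columns."""
    A = np.stack([p.ravel() for p in neighbor_patches], axis=1)  # d x k
    b = center_patch.ravel()
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)
```

One coefficient vector per pixel position, concatenated over the image, gives the kind of structure feature vector from which the sub-images ("structure images") are assembled.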
NASA Astrophysics Data System (ADS)
Labate, Demetrio; Negi, Pooran; Ozcan, Burcin; Papadakis, Manos
2015-09-01
As advances in imaging technologies make more and more data available for biomedical applications, there is an increasing need to develop efficient quantitative algorithms for the analysis and processing of imaging data. In this paper, we introduce an innovative multiscale approach called Directional Ratio which is especially effective at distinguishing isotropic from anisotropic structures. This task is particularly useful in the analysis of images of neurons, the main units of the nervous system, which consist of a main cell body called the soma and many elongated processes called neurites. We analyze the theoretical properties of our method on idealized models of neurons and develop a numerical implementation of this approach for the analysis of fluorescent images of cultured neurons. We show that this algorithm is very effective for the detection of somas and the extraction of neurites in images of small circuits of neurons.
3D automatic anatomy recognition based on iterative graph-cut-ASM
NASA Astrophysics Data System (ADS)
Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.
2010-02-01
We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.
NASA Astrophysics Data System (ADS)
Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki
2016-12-01
We have previously proposed an estimation method for intravascular oxygen saturation (SO_2) from the images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry), and we investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and perform experiments with turbid phantoms as well as Monte Carlo simulations to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin, called average extinction coefficients (AECs), to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and obtained images of turbid phantoms comprised of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that an increase in scattering by the phantom body brought about a decrease in the AECs. The experimental results showed that the use of suitable values for the AECs led to more accurate SO_2 estimation. We also confirmed the validity of the proposed correction method to improve the accuracy of the SO_2 estimation.
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
NASA Astrophysics Data System (ADS)
Xu, Lianyun; Hou, Zhende; Qin, Yuwen
2002-05-01
Because some composite materials, thin-film materials, and biomaterials are very thin, and some are flexible, the classical methods of measuring their Young's moduli by mounting extensometers on specimens are not applicable. A bi-image method based on image correlation for measuring Young's moduli is developed in this paper. The measuring precision is improved by one order of magnitude over general digital image correlation (the single-image method). In this way, the Young's modulus of an SS301 stainless steel thin tape with a thickness of 0.067 mm is measured, and the moduli of polyester fiber films, a kind of flexible sheet with a thickness of 0.25 mm, are also measured.
MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?
Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence
2017-09-01
Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results often displaying method-specific artifacts that can be dismissed by comparison between results.
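The single-channel intuition behind plugging Gaussian denoisers into speckle reduction is homomorphic filtering: a log transform turns multiplicative fluctuations into approximately additive ones, so any Gaussian denoiser can be applied in the log domain. The sketch below shows only this intuition; MuLoG itself adds debiasing and matrix-logarithm machinery for the multi-channel case:

```python
import numpy as np

def homomorphic_speckle_filter(intensity, gaussian_denoiser):
    """Map a positive intensity image to the log domain (multiplicative
    speckle becomes approximately additive), apply any plug-in Gaussian
    denoiser there, then map back by exponentiation."""
    log_img = np.log(intensity)
    denoised = gaussian_denoiser(log_img)
    return np.exp(denoised)
```

With an identity "denoiser" the image passes through unchanged; with a mean filter in the log domain, the output is the geometric mean of the input, the natural average for multiplicative noise.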
Rajković, Nemanja; Krstonošić, Bojana; Milošević, Nebojša
2017-01-01
This study calls attention to the difference between the traditional box-counting (BC) method and its modification. The appropriate scaling factor, the influence of image size and resolution, and image rotation, as well as different image presentations, are shown on a sample of asymmetrical neurons from the monkey dentate nucleus. The standard BC method and its modification were evaluated on a sample of 2D neuronal images from the human neostriatum. In addition, three box dimensions (which estimate the space-filling property, the shape, complexity, and the irregularity of the dendritic tree) were used to evaluate differences in the morphology of type III aspiny neurons between two parts of the neostriatum.
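The standard box-counting dimension discussed above can be estimated by counting occupied grid cells at several scales and fitting the log-log slope. A minimal 2-D sketch (the grid sizes and point-set representation are illustrative, not the study's protocol):

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting dimension of a 2-D point set: count occupied
    grid cells at each box size, then return the least-squares
    slope of log(count) against log(1/size)."""
    samples = []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        samples.append((math.log(1.0 / s), math.log(len(boxes))))
    n = len(samples)
    mx = sum(a for a, _ in samples) / n
    my = sum(b for _, b in samples) / n
    num = sum((a - mx) * (b - my) for a, b in samples)
    den = sum((a - mx) ** 2 for a, _ in samples)
    return num / den
```

A straight line fills boxes like N ~ 1/s (dimension 1), while a filled square fills them like N ~ 1/s^2 (dimension 2); a dendritic tree falls in between.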
Magnetic Interactions and the Method of Images: A Wealth of Educational Suggestions
ERIC Educational Resources Information Center
Bonanno, A.; Camarca, M.; Sapia, P.
2011-01-01
Under some conditions, the method of images (well known in electrostatics) may be implemented in magnetostatic problems too, giving an excellent example of the usefulness of formal analogies in the description of physical systems. In this paper, we develop a quantitative model for the magnetic interactions underlying the so-called Geomag[TM]…
Novel optical scanning cryptography using Fresnel telescope imaging.
Yan, Aimin; Sun, Jianfeng; Hu, Zhijuan; Zhang, Jingtao; Liu, Liren
2015-07-13
We propose a new method, called modified optical scanning cryptography, that uses a Fresnel telescope imaging technique for the encryption and decryption of remote objects. An image or object can be optically encrypted on the fly by the Fresnel telescope scanning system together with an encryption key. For image decryption, the encrypted signals are received and processed with an optical coherent heterodyne detection system. The proposed method achieves strong performance through secure Fresnel telescope scanning with orthogonally polarized beams and efficient all-optical information processing. The validity of the proposed method is demonstrated by numerical simulations and experimental results.
NASA Astrophysics Data System (ADS)
Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2015-12-01
In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame setting, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
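The WNNM building block is a weighted soft-thresholding of singular values: large singular values (structure) receive small weights and survive, while small ones (noise) are suppressed. One such proximal step, with illustrative uniform weights, can be sketched as:

```python
import numpy as np

def weighted_nuclear_norm_step(Y, weights):
    """One WNNM proximal step on a matrix of similar patches:
    soft-threshold each singular value by its own weight and
    reconstruct. In practice the weights are set inversely
    proportional to the estimated singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - np.asarray(weights, dtype=float), 0.0)
    return U @ np.diag(s_thr) @ Vt
```

In the multi-frame setting, the matrix Y would stack similar patches gathered across the nearby B-scans rather than within a single frame.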
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
Video based object representation and classification using multiple covariance matrices.
Zhang, Yurong; Liu, Quan
2017-01-01
Video based object recognition and classification has been widely studied in computer vision and image processing. One main issue of this task is to develop an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and a nearest-neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
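The per-cluster covariance representation at the heart of MCDL can be sketched as follows; the descriptor layout, the small ridge term keeping each matrix positive definite, and the function name are illustrative assumptions (the paper clusters with NMF, which is replaced here by precomputed labels):

```python
import numpy as np

def cluster_covariances(features, labels, eps=1e-6):
    """Represent an image set by one covariance matrix per cluster.
    features: n x d matrix of per-image descriptors; labels: cluster
    id per image. A small ridge eps keeps every matrix positive
    definite even when a cluster's descriptors are nearly collinear."""
    covs = {}
    for c in np.unique(labels):
        X = features[labels == c]
        covs[c] = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    return covs
```

Each set is then compared cluster-to-cluster in the space of symmetric positive-definite matrices, which is where the discriminative learning step operates.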
Turboprop: improved PROPELLER imaging.
Pipe, James G; Zwart, Nicholas
2006-02-01
A variant of periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) MRI, called turboprop, is introduced. This method employs an oscillating readout gradient during each spin echo of the echo train to collect more lines of data per echo train, which reduces the minimum scan time, motion-related artifact, and specific absorption rate (SAR) while increasing sampling efficiency. It can be applied to conventional fast spin-echo (FSE) imaging; however, this article emphasizes its application in diffusion-weighted imaging (DWI). The method is described and compared with conventional PROPELLER imaging, and clinical images collected with this PROPELLER variant are shown. Copyright 2006 Wiley-Liss, Inc.
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical differences across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (called the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (called the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness when compared with state-of-the-art pair-wise correspondence detection methods as well as a similar method using a static population graph. Copyright © 2015 Elsevier B.V. All rights reserved.
Multiresolution generalized N dimension PCA for ultrasound image denoising
2014-01-01
Background Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnoses. Method In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, as a multilinear subspace learning method, is used for denoising. The levels are then combined to achieve the final denoised image based on Laplacian pyramids. Results The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving structure. Our method is also robust for images with much higher levels of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917
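The Gaussian-pyramid stage of such a multiresolution scheme can be sketched with a simple blur-and-downsample loop; the 2x2 box average below is a stand-in for the usual 5-tap Gaussian kernel, chosen only to keep the sketch short:

```python
import numpy as np

def gaussian_pyramid(image, levels):
    """Build a simple image pyramid: average each 2x2 block
    (a crude low-pass filter) and downsample by a factor of 2,
    repeating until the requested number of levels is reached."""
    pyr = [image]
    for _ in range(levels - 1):
        img = pyr[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]  # crop to even dimensions
        down = (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        pyr.append(down)
    return pyr
```

Denoising each level independently and recombining through Laplacian differences is what lets the method treat coarse structure and fine speckle at their native scales.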
The mass remote sensing image data management based on Oracle InterMedia
NASA Astrophysics Data System (ADS)
Zhao, Xi'an; Shi, Shaowei
2013-07-01
With the development of remote sensing technology, more and more image data are being acquired, and how to apply and manage massive image data safely and efficiently has become an urgent problem. According to the methods and characteristics of mass remote sensing image data management and application, this paper puts forward a new method that uses the Oracle Call Interface and Oracle InterMedia to store the image data, and then uses these components to realize the system's function modules. Finally, image data storage and management are successfully implemented with VC and the Oracle InterMedia component.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization aims to discover the dominant object classes in a given image collection and localize all of their instances without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, those methods exploit no prior knowledge about the given image collection to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics have no clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, so the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization.
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.Unsupervised object discovery and localization is to discover some dominant object classes and localize all of object instances from a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue-some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of the so-called must-links are exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined as that one must-link only constrains one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, thus the must-links in our approach are semantic-specific , which allows to more efficiently exploit discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, unlike discriminative methods, our approach can discover the object classes naturally present in the given image collection, which makes it well suited for realistic applications of unsupervised object discovery.
PIRIA: a general tool for indexing, search, and retrieval of multimedia content
NASA Astrophysics Data System (ADS)
Joint, Magali; Moellic, Pierre-Alain; Hede, P.; Adam, P.
2004-05-01
The Internet is a continuously expanding source of multimedia content and information. Many products are in development to search, retrieve, and understand multimedia content, but most current image search/retrieval engines rely on an image database manually pre-indexed with keywords. Computers are still powerless to understand the semantic meaning of still or animated image content. Piria (Program for the Indexing and Research of Images by Affinity), the search engine we have developed, brings this possibility closer to reality. Piria is a novel search engine that uses the query-by-example method. A user query is submitted to the system, which then returns a list of images ranked by similarity, obtained by a metric distance that operates on every indexed image signature. The indexed images are compared according to several different classifiers, not only keywords but also form, color, and texture, taking into account geometric transformations and variations such as rotation, symmetry, and mirroring. Form: edges extracted by an efficient segmentation algorithm. Color: histogram, semantic color segmentation, and spatial color relationships. Texture: texture wavelets and local edge patterns. If required, Piria is also able to fuse results from multiple classifiers with a new classification of index categories: Single Indexer Single Call (SISC), Single Indexer Multiple Call (SIMC), Multiple Indexers Single Call (MISC), or Multiple Indexers Multiple Call (MIMC). Commercial and industrial applications are explored and discussed, as well as current and future development.
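As an illustration of the query-by-example principle described above, here is a minimal sketch in Python. The choice of a joint color histogram as the image signature and of L1 distance as the similarity metric are our own illustrative assumptions, not Piria's actual indexers:

```python
import numpy as np

def color_signature(image, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint
    color histogram, normalized to sum to 1 (a simple image signature)."""
    quantized = (image.astype(np.int64) * bins) // 256  # values in [0, bins)
    codes = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def rank_by_affinity(query_sig, indexed_sigs):
    """Return indices of `indexed_sigs` sorted by L1 distance to the query
    signature (smallest distance first), i.e. a similarity-ranked list."""
    dists = [np.abs(query_sig - sig).sum() for sig in indexed_sigs]
    return np.argsort(dists)
```

In a full system each image would carry several such signatures (form, color, texture), with a fusion step combining the per-classifier rankings.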
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non-Local Weights) replaces heuristic parametric constants in the GGF-BNLM method with values derived dynamically from image statistics for weight computation. The proposed changes make GGF-BNLM adaptive, and as a result significant improvement is achieved in terms of performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
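For readers unfamiliar with non-local weighting, the sketch below shows a classic (additive-noise, non-Bayesian) non-local means filter, not the paper's GGF-BNLM weights. The only connection to the abstract's idea is that the filtering strength `h` defaults to a value derived from the image's own statistics rather than a hand-tuned constant:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=None):
    """Classic non-local means sketch: each pixel is replaced by a weighted
    average of pixels in a search window, weights from patch similarity."""
    if h is None:
        # Derive the filtering strength from image statistics instead of
        # a heuristic constant (illustrative adaptive choice).
        h = max(float(img.std()), 1e-6)
    p, s = patch // 2, search // 2
    pad = p + s
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            num = den = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    # Patch-distance weight: similar patches count more.
                    w = np.exp(-((ref - cand) ** 2).sum() / (h * patch) ** 2)
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```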
The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C. G.; School of Physics, University of Melbourne, Parkville VIC; CODES Centre of Excellence, University of Tasmania, Hobart TAS
2010-04-06
Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.
The New Maia Detector System: Methods For High Definition Trace Element Imaging Of Natural Material
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ryan, C.G.; Siddons, D.P.; Kirkham, R.
2010-05-25
Motivated by the need for megapixel high definition trace element imaging to capture intricate detail in natural material, together with faster acquisition and improved counting statistics in elemental imaging, a large energy-dispersive detector array called Maia has been developed by CSIRO and BNL for SXRF imaging on the XFM beamline at the Australian Synchrotron. A 96 detector prototype demonstrated the capacity of the system for real-time deconvolution of complex spectral data using an embedded implementation of the Dynamic Analysis method and acquiring highly detailed images up to 77 M pixels spanning large areas of complex mineral sample sections.
A New Approach to Image Fusion Based on Cokriging
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; LeMoigne, Jacqueline; Mount, David M.; Morisette, Jeffrey T.
2005-01-01
We consider the image fusion problem involving remotely sensed data and introduce cokriging as a method to perform fusion. We investigate the advantages of fusing Hyperion with ALI data. The evaluation is performed by comparing the classification of the fused data with that of the input images and by calculating well-chosen quantitative fusion quality metrics. We consider the Invasive Species Forecasting System (ISFS) project as our fusion application. The fusion of ALI with Hyperion data is studied using PCA- and wavelet-based fusion. We then propose a geostatistical interpolation method called cokriging as a new approach to image fusion.
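Cokriging generalizes kriging by adding cross-variograms between sensors to the same linear system. As background, here is a tiny single-variable ordinary kriging sketch in 1-D; the exponential variogram and the sample data are illustrative assumptions:

```python
import numpy as np

def ordinary_kriging(xs, ys, x0, variogram=lambda h: 1.0 - np.exp(-h)):
    """1-D ordinary kriging sketch: solve for weights w such that the
    weighted semivariances match those to the target point, subject to
    the weights summing to 1 (enforced via a Lagrange multiplier row)."""
    n = len(xs)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(abs(xs[i] - xs[j]))
    b = np.ones(n + 1)
    b[:n] = [variogram(abs(x - x0)) for x in xs]
    w = np.linalg.solve(A, b)
    return float(w[:n] @ np.asarray(ys))
```

Ordinary kriging is an exact interpolator: predicting at a sampled location returns the sample value, since the right-hand side then equals a column of the system matrix.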
Comparative Study of Speckle Filtering Methods in PolSAR Radar Images
NASA Astrophysics Data System (ADS)
Boutarfa, S.; Bouchemakh, L.; Smara, Y.
2015-04-01
Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance, and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; the improved Sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers, in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and a technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT, in which one filter can boost the results of the other. The originality of our work lies in the application of these methods to several types of images (amplitude, intensity, and complex, from satellite or airborne radar) and in the optimization of the wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect to achieve a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band, and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, edge preservation, and preservation of polarimetric information. Experimental results illustrate the different implemented methods.
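For context on the Lee filter family discussed above, the sketch below implements the classic (unrefined) MMSE Lee filter for an L-look intensity image; the simplified gain formula `k = 1 - Cu^2/Ci^2` and the 3x3 window are illustrative choices, not the refined variant used in the paper:

```python
import numpy as np

def lee_filter(img, win=3, looks=1):
    """Classic MMSE Lee speckle filter sketch for an L-look SAR
    intensity image; output blends each pixel with its local mean."""
    p = win // 2
    padded = np.pad(img.astype(float), p, mode='reflect')
    out = np.empty(img.shape)
    cu2 = 1.0 / looks  # squared coefficient of variation of speckle
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            window = padded[i:i + win, j:j + win]
            m, v = window.mean(), window.var()
            ci2 = v / (m * m) if m > 0 else 0.0  # local squared CV
            # MMSE-style gain: 0 in homogeneous areas (pure smoothing),
            # near 1 on strong structure (pixel kept).
            k = 1.0 - cu2 / ci2 if ci2 > cu2 else 0.0
            out[i, j] = m + k * (img[i, j] - m)
    return out
```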
Siegel, Nisan; Brooker, Gary
2014-09-22
FINCH holographic fluorescence microscopy creates super-resolved images with enhanced depth of focus. Addition of a Nipkow disk real-time confocal image scanner is shown to reduce the FINCH depth of focus while improving transverse confocal resolution in a combined method called "CINCH".
NASA Astrophysics Data System (ADS)
Silva, Ricardo Petri; Naozuka, Gustavo Taiji; Mastelini, Saulo Martiello; Felinto, Alan Salvany
2018-01-01
The incidence of luminous reflections (LR) in captured images can interfere with the color of the affected regions. These regions tend to oversaturate, becoming whitish and, consequently, losing the original color information of the scene. Decision processes that employ images acquired from digital cameras, including real-time video surgeries and facial and ocular recognition, can be impaired by LR incidence. This work proposes an algorithm called contrast enhancement of potential LR regions, a preprocessing step that increases the contrast of potential LR regions in order to improve the performance of automatic LR detectors. In addition, three automatic detectors were compared with and without our preprocessing method. The first is a technique already consolidated in the literature, the Chang-Tseng threshold. We propose two further automatic detectors, called adapted histogram peak and global threshold. We employed four performance metrics to evaluate the detectors: accuracy, precision, exactitude, and root mean square error. The exactitude metric, introduced in this work, is computed against a manually defined reference model. The global threshold detector combined with our preprocessing method presented the best results, with an average exactitude rate of 82.47%.
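A minimal sketch of the contrast-enhance-then-threshold idea, with entirely illustrative choices (a percentile-based stretch of the bright range and a fixed threshold of 230); it is not the paper's actual algorithm:

```python
import numpy as np

def detect_reflections(gray, thresh=230):
    """Global-threshold LR detector sketch: stretch the contrast of the
    upper intensity range (potential LR regions), then binarize."""
    g = gray.astype(float)
    lo, hi = np.percentile(g, 50), g.max()  # stretch the bright half
    stretched = np.clip((g - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0
    return stretched >= thresh  # boolean reflection mask
```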
Differential Binary Encoding Method for Calibrating Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro-Galilea, José Luis; Gardel, Alfredo; Espinosa, Felipe; Bravo, Ignacio; Cano, Ángel
2012-01-01
Image transmission using incoherent optical fiber bundles (IOFBs) requires prior calibration to obtain the spatial in-out fiber correspondence necessary to reconstruct the image captured by the pseudo-sensor. This information is recorded in a look-up table called the Reconstruction Table (RT), used later for reordering the fiber positions and reconstructing the original image. This paper presents a very fast method based on image scanning using spaces encoded by a weighted binary code to obtain the in-out correspondence. The results demonstrate that this technique yields a remarkable reduction in processing time, and the image reconstruction quality is very good compared to previous techniques based on, for example, spot or line scanning. PMID:22666023
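The appeal of binary-coded scanning is that N positions can be identified with only about log2(N) projected patterns instead of N spot scans. A minimal sketch of the encoding/decoding idea (the exact patterns used in the paper may differ):

```python
import numpy as np

def binary_patterns(n_positions):
    """One pattern per bit: pattern b lights input position p
    iff bit b of p is 1, so ceil(log2(N)) patterns suffice."""
    bits = max(1, int(np.ceil(np.log2(n_positions))))
    return [[(p >> b) & 1 for p in range(n_positions)] for b in range(bits)]

def decode_position(observed_bits):
    """Recover a fiber's input position from its on/off readings
    across the projected patterns (least significant bit first)."""
    return sum(bit << b for b, bit in enumerate(observed_bits))
```

Each fiber's observed on/off sequence across the pattern set is exactly the binary expansion of its input position, which is what the Reconstruction Table stores.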
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Malware analysis using visualized image matrices.
Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
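A toy sketch of the opcode-sequence-to-image-matrix idea: the hash-based color mapping, row-major layout, and pixel-match similarity below are our own illustrative stand-ins, not the paper's exact encoding or similarity measure:

```python
import hashlib
import numpy as np

def opcode_to_pixel(opcode):
    """Map an opcode mnemonic to a deterministic RGB color by hashing
    (illustrative; the paper derives colors from opcode sequences)."""
    digest = hashlib.md5(opcode.encode()).digest()
    return digest[0], digest[1], digest[2]

def opcode_image(opcodes, width=8):
    """Lay the opcode sequence out row by row as an RGB image matrix."""
    pixels = [opcode_to_pixel(op) for op in opcodes]
    height = -(-len(pixels) // width)  # ceiling division
    img = np.zeros((height, width, 3), np.uint8)
    for k, px in enumerate(pixels):
        img[k // width, k % width] = px
    return img

def similarity(img_a, img_b):
    """Fraction of exactly matching pixels between two equal-shaped
    image matrices (a crude stand-in for the paper's similarity)."""
    return float((img_a == img_b).all(axis=-1).mean())
```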
Developing tools for digital radar image data evaluation
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.; Raggam, J.
1986-01-01
The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated through satellite radar are combined with standard image processing techniques to create a user environment to manipulate and analyze airborne and satellite radar images. One aim is to create radar products for the user from the original data to enhance the ease of understanding the contents. The results are called secondary image products and derive from the original digital images. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.
Implementation of sobel method to detect the seed rubber plant leaves
NASA Astrophysics Data System (ADS)
Suyanto; Munte, J.
2018-03-01
This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the leaf pattern of the plant. The research steps are image data acquisition, image preprocessing, edge detection, and identification by template matching. Edge detection uses the Sobel operator. For pattern recognition, the input image is compared with the images in a database, called templates. The experiment was carried out in one phase, identification of the leaf edge, using 14 superior rubber plant leaf images and 5 test images for each type (clone) of the plant. The experimental results yield a recognition rate of 91.79%.
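The Sobel step above can be sketched directly; this is the standard pair of 3x3 kernels with zero padding, not the paper's full pipeline:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray):
    """Gradient magnitude from the two 3x3 Sobel kernels (zero padding
    at the borders); large values mark candidate leaf-edge pixels."""
    g = gray.astype(float)
    p = np.pad(g, 1)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    # Accumulate the correlation one kernel tap at a time.
    for i in range(3):
        for j in range(3):
            region = p[i:i + g.shape[0], j:j + g.shape[1]]
            gx += SOBEL_X[i, j] * region
            gy += SOBEL_Y[i, j] * region
    return np.hypot(gx, gy)
```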
Computational and design methods for advanced imaging
NASA Astrophysics Data System (ADS)
Birch, Gabriel C.
This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. It is divided into two parts: the first discusses a new active-illumination depth sensing modality, while the second discusses a passive-illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage of this method is that the illumination and imaging axes can be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques for raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.
How to Find a Tiny Wobble in a Zippy Star
2009-05-28
This image shows the star VB 10 moving across the sky over a period of nine years. Astronomers nabbed a planet circling this star using a method called astrometry -- the first successful application of the method to planet hunting.
Siegel, Nisan; Brooker, Gary
2014-01-01
FINCH holographic fluorescence microscopy creates super-resolved images with enhanced depth of focus. Addition of a Nipkow disk real-time confocal image scanner is shown to reduce the FINCH depth of focus while improving transverse confocal resolution in a combined method called “CINCH”. PMID:25321701
Restoration of out-of-focus images based on circle of confusion estimate
NASA Astrophysics Data System (ADS)
Vivirito, Paolo; Battiato, Sebastiano; Curti, Salvatore; La Cascia, M.; Pirrone, Roberto
2002-11-01
In this paper, a new method for fast out-of-focus blur estimation and restoration is proposed, suitable for CFA (Color Filter Array) images acquired by typical CCD/CMOS sensors. The method is based on the analysis of a single image and consists of two steps: 1) out-of-focus blur estimation via Bayer pattern analysis; 2) image restoration. Blur estimation is based on a block-wise edge detection technique carried out on the green pixels of the CFA sensor image, also called the Bayer pattern. Once the blur level has been estimated, the image is restored through a new inverse filtering technique. This algorithm gives sharp images while reducing ringing and crispening artifacts over a wider range of frequencies. Experimental results show the effectiveness of the method, both subjectively and numerically, by comparison with other techniques in the literature.
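The green channel carries half the samples of an RGGB Bayer mosaic, which is why it is a natural place to measure edge sharpness. A crude sketch of that idea (a global mean absolute difference between adjacent green samples, rather than the paper's block-wise edge detector):

```python
import numpy as np

def green_sharpness(raw):
    """Mean absolute difference between horizontally adjacent green
    samples of an RGGB Bayer mosaic: a crude sharpness cue (defocus
    blur lowers it). Assumes an even-sized RGGB layout."""
    g1 = raw[0::2, 1::2].astype(float)  # green samples on red rows
    g2 = raw[1::2, 0::2].astype(float)  # green samples on blue rows
    d1 = np.abs(np.diff(g1, axis=1)).mean()
    d2 = np.abs(np.diff(g2, axis=1)).mean()
    return (d1 + d2) / 2.0
```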
MOSAIC: Software for creating mosaics from collections of images
NASA Technical Reports Server (NTRS)
Varosi, F.; Gezari, D. Y.
1992-01-01
We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented in the Interactive Data Language (IDL) and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures while preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X Windows or SunView graphics interface.
Biological imaging in radiation therapy: role of positron emission tomography.
Nestle, Ursula; Weber, Wolfgang; Hentschel, Michael; Grosu, Anca-Ligia
2009-01-07
In radiation therapy (RT), staging, treatment planning, monitoring, and evaluation of response are traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). These radiological investigations have the significant advantage of showing the anatomy with high resolution, and are therefore also called anatomical imaging. In recent years, so-called biological imaging methods, which visualize metabolic pathways, have been developed. These methods offer complementary imaging of various aspects of tumour biology. To date, the most prominent biological imaging system in use is positron emission tomography (PET), whose diagnostic properties have been clinically evaluated for years. The aim of this review is to discuss the valences and implications of PET in RT. We focus our evaluation on the following topics: the role of biological imaging for tumour tissue detection/delineation of the gross tumour volume (GTV) and for the visualization of heterogeneous tumour biology. We discuss the role of fluorodeoxyglucose-PET in lung and head and neck cancer and the impact of amino acid (AA)-PET in target volume delineation of brain gliomas. Furthermore, we summarize the literature on tumour hypoxia and proliferation visualized by PET. We conclude that, regarding treatment planning in radiotherapy, PET offers advantages in terms of tumour delineation and the description of biological processes. However, to define the real impact of biological imaging on clinical outcome after radiotherapy, further experimental, clinical, and cost/benefit analyses are required.
TOPICAL REVIEW: Biological imaging in radiation therapy: role of positron emission tomography
NASA Astrophysics Data System (ADS)
Nestle, Ursula; Weber, Wolfgang; Hentschel, Michael; Grosu, Anca-Ligia
2009-01-01
In radiation therapy (RT), staging, treatment planning, monitoring, and evaluation of response are traditionally based on computed tomography (CT) and magnetic resonance imaging (MRI). These radiological investigations have the significant advantage of showing the anatomy with high resolution, and are therefore also called anatomical imaging. In recent years, so-called biological imaging methods, which visualize metabolic pathways, have been developed. These methods offer complementary imaging of various aspects of tumour biology. To date, the most prominent biological imaging system in use is positron emission tomography (PET), whose diagnostic properties have been clinically evaluated for years. The aim of this review is to discuss the valences and implications of PET in RT. We focus our evaluation on the following topics: the role of biological imaging for tumour tissue detection/delineation of the gross tumour volume (GTV) and for the visualization of heterogeneous tumour biology. We discuss the role of fluorodeoxyglucose-PET in lung and head and neck cancer and the impact of amino acid (AA)-PET in target volume delineation of brain gliomas. Furthermore, we summarize the literature on tumour hypoxia and proliferation visualized by PET. We conclude that, regarding treatment planning in radiotherapy, PET offers advantages in terms of tumour delineation and the description of biological processes. However, to define the real impact of biological imaging on clinical outcome after radiotherapy, further experimental, clinical, and cost/benefit analyses are required.
NASA Astrophysics Data System (ADS)
Makita, Shuichi; Kurokawa, Kazuhiro; Hong, Young-Joo; Li, En; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
A new optical coherence angiography (OCA) method, called correlation mapping OCA (cmOCA), is presented, based on the SNR-corrected complex correlation. An SNR-correction theory for the complex correlation calculation is presented, and the method also integrates a motion-artifact-removal step for the decorrelation artifact induced by sample motion. The theory is further extended to compute a more reliable correlation using multichannel OCT systems, such as Jones-matrix OCT. High-contrast vasculature imaging of the in vivo human posterior eye has been obtained. Composite imaging of cmOCA and the degree of polarization uniformity indicates abnormalities of the vasculature and pigmented tissues simultaneously.
Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary
2015-02-07
FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.
Cruz-Roa, Angel; Díaz, Gloria; Romero, Eduardo; González, Fabio A.
2011-01-01
Histopathological images are an important resource for clinical diagnosis and biomedical research. From an image understanding point of view, the automatic annotation of these images is a challenging problem. This paper presents a new method for automatic histopathological image annotation based on three complementary strategies: first, a part-based image representation, called the bag of features, which takes advantage of the natural redundancy of histopathological images for capturing the fundamental patterns of biological structures; second, a latent topic model, based on non-negative matrix factorization, which captures the high-level visual patterns hidden in the image; and third, a probabilistic annotation model that links the visual appearance of morphological and architectural features to 10 histopathological image annotations. The method was evaluated using 1,604 annotated images of skin tissues, which included normal and pathological architectural and morphological features, obtaining a recall of 74% and a precision of 50%, improving on a baseline annotation method based on support vector machines by 64% and 24%, respectively. PMID:22811960
Speckle reduction of OCT images using an adaptive cluster-based filtering
NASA Astrophysics Data System (ADS)
Adabi, Saba; Rashedi, Elaheh; Conforto, Silvia; Mehregan, Darius; Xu, Qiuyun; Nasiriavanaki, Mohammadreza
2017-02-01
Optical coherence tomography (OCT) has become a favorable device in the dermatology discipline due to its moderate resolution and penetration depth. OCT images, however, contain a grainy pattern, called speckle, due to the broadband source used in the configuration of OCT. So far, a variety of filtering techniques has been introduced to reduce speckle in OCT images. Most of these methods are generic and can be applied to OCT images of different tissues. In this paper, we present a method for speckle reduction of OCT skin images. Considering the architectural structure of skin layers, a skin image can benefit from being segmented into differentiable clusters and filtered separately in each cluster, using a clustering method and filtering methods such as the Wiener filter. The proposed algorithm was tested on an optical solid phantom with predetermined optical properties, as well as on healthy skin images. The results show that the cluster-based filtering method can reduce speckle and increase the signal-to-noise ratio and contrast while preserving edges in the image.
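A toy sketch of the cluster-then-filter idea: 1-D k-means on intensity, then replacing each pixel by its cluster mean. The cluster-mean step is a crude stand-in for the per-cluster Wiener filtering described in the abstract, and all parameters are illustrative:

```python
import numpy as np

def cluster_filter(img, n_clusters=2, n_iter=10):
    """Segment pixels into intensity clusters (simple 1-D k-means),
    then smooth each cluster separately (here: replace by its mean)."""
    flat = img.astype(float).ravel()
    centers = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        # Assign each pixel to the nearest cluster center.
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = flat[labels == k].mean()
    return centers[labels].reshape(img.shape)
```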
Fingerprint image enhancement by differential hysteresis processing.
Blotta, Eduardo; Moler, Emilce
2004-05-10
A new method to enhance defective fingerprint images through digital image processing tools is presented in this work. When fingerprints have been taken without care, blurred and in some cases mostly illegible, as in the case presented here, their classification and comparison become nearly impossible. A combination of spatial-domain filters, including a technique called differential hysteresis processing (DHP), is applied to improve these kinds of images. This set of filtering methods proved satisfactory in a wide range of cases by uncovering hidden details that helped to identify persons. Dactyloscopy experts from Policia Federal Argentina and the EAAF have validated these results.
Automatic face naming by learning discriminative affinity matrices from weakly labeled images.
Xiao, Shijie; Xu, Dong; Wu, Jianxin
2015-10-01
Given a collection of images, where each image contains several faces and is associated with a few names in the corresponding caption, the goal of face naming is to infer the correct name for each face. In this paper, we propose two new methods to effectively solve this problem by learning two discriminative affinity matrices from these weakly labeled images. We first propose a new method called regularized low-rank representation by effectively utilizing weakly supervised information to learn a low-rank reconstruction coefficient matrix while exploring multiple subspace structures of the data. Specifically, by introducing a specially designed regularizer to the low-rank representation method, we penalize the corresponding reconstruction coefficients related to the situations where a face is reconstructed by using face images from other subjects or by using itself. With the inferred reconstruction coefficient matrix, a discriminative affinity matrix can be obtained. Moreover, we also develop a new distance metric learning method called ambiguously supervised structural metric learning by using weakly supervised information to seek a discriminative distance metric. Hence, another discriminative affinity matrix can be obtained using the similarity matrix (i.e., the kernel matrix) based on the Mahalanobis distances of the data. Observing that these two affinity matrices contain complementary information, we further combine them to obtain a fused affinity matrix, based on which we develop a new iterative scheme to infer the name of each face. Comprehensive experiments demonstrate the effectiveness of our approach.
Group-sparse representation with dictionary learning for medical image denoising and fusion.
Li, Shutao; Yin, Haitao; Fang, Leyuan
2012-12-01
Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure that nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented, which is solved by alternating group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced to be small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of the proposed denoising and fusion approaches.
Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect
Edrei, Eitan; Scarcelli, Giuliano
2016-01-01
Several phenomena have been recently exploited to circumvent scattering and have succeeded in imaging or focusing light through turbid layers. However, the requirement for the turbid medium to be steady during the imaging process remains a fundamental limitation of these methods. Here we introduce an optical imaging modality that overcomes this challenge by taking advantage of the so-called shower-curtain effect, adapted to the spatial-frequency domain via speckle correlography. We present high resolution imaging of objects hidden behind millimeter-thick tissue or dense lens cataracts. We demonstrate our imaging technique to be insensitive to rapid medium movements (> 5 m/s) beyond any biologically-relevant motion. Furthermore, we show this method can be extended to several contrast mechanisms and imaging configurations. PMID:27347498
Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction
NASA Astrophysics Data System (ADS)
Scarnati, Theresa; Gelb, Anne
2018-04-01
In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method to form SAR images from the multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric, and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase history data that result in speckled images and images from the MSTAR T-72 database.
Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction
Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark
2017-01-01
Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3D reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well-known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing, e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about two orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
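The “Integral image” component is the easiest piece to make concrete: after a one-time cumulative sum, any rectangular region of a silhouette image can be summed in O(1), which is what makes testing many projected voxel footprints cheap. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def integral_image(img):
    """Summed-area table S with S[i, j] = sum of img[:i, :j]
    (zero-padded first row and column for clean O(1) queries)."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    S[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]
```

A carving loop would call `box_sum` on each camera's silhouette for the bounding box of a projected octree cell; a zero sum lets the whole cell be culled without visiting individual pixels.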
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
Malware Analysis Using Visualized Image Matrices
Im, Eul Gyu
2014-01-01
This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities between the image matrices. In particular, our proposed methods are applicable to packed malware samples, by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overhead by extracting the opcode sequences only from the blocks that include instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively. PMID:25133202
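The idea of hashing opcode sequences into colored pixels can be illustrated as follows; the hash-to-coordinate scheme and the pixel-matching similarity below are simplified stand-ins for the paper's method, not its actual definitions:

```python
import hashlib
import numpy as np

def opcode_image(opcodes, size=8):
    """Map an opcode sequence to an RGB image matrix: each consecutive
    opcode pair is hashed to pixel coordinates and an RGB value
    (illustrative scheme, not the paper's exact mapping)."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for a, b in zip(opcodes, opcodes[1:]):
        h = hashlib.md5(f"{a},{b}".encode()).digest()
        r, c = h[0] % size, h[1] % size
        img[r, c] = list(h[2:5])  # three hash bytes become the RGB value
    return img

def similarity(img1, img2):
    """Fraction of pixel positions whose RGB values coincide."""
    return float(np.mean(np.all(img1 == img2, axis=-1)))
```

Identical opcode sequences map to identical matrices, so similarity is 1.0 for a sample compared with itself; family classification would compare an unknown sample's matrix against each family's representative image.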
Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.
Gao, Duyang; Yuan, Zhen
2017-01-01
Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to combine the complementary merits of each single modality. Meanwhile, recent interest in laser-induced photoacoustic imaging is rapidly growing due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging. In particular, we focus on the methods used to construct multimodal nanoprobes, which we divide into two types. The first, which we call the "one for all" concept, exploits the intrinsic properties of the elements in a single particle. The second, the "all in one" concept, integrates different functional blocks in one particle. We then briefly introduce the applications of these multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods for constructing multimodal nanoprobes and share our viewpoints in this area.
Shift-Invariant Image Reconstruction of Speckle-Degraded Images Using Bispectrum Estimation
1990-05-01
process with the requisite negative exponential pdf. I call this model the Negative Exponential Model (NENI). [The remainder of this excerpt is figure-caption residue: the NENI flowchart (Figure 6), statistical histograms and phase for the RNG EXP PDF MULT method (Figure 13d-g), and a truth object speckled via the NENI (Figure 14a).]
Document image database indexing with pictorial dictionary
NASA Astrophysics Data System (ADS)
Akbari, Mohammad; Azimi, Reza
2010-02-01
In this paper we introduce a new approach for information retrieval from a Persian document image database without using Optical Character Recognition (OCR). First, an attribute called the subword upper contour label is defined; then a pictorial dictionary is constructed based on this attribute for the subwords. With this approach we address two issues in document image retrieval: keyword spotting and retrieval according to document similarities. The proposed methods have been evaluated on a Persian document image database. The results have proved the ability of this approach in document image information retrieval.
Model-based restoration using light vein for range-gated imaging systems.
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Wang, Rui; Guo, Jin; Tian, Yuzhen
2016-09-10
The images captured by an airborne range-gated imaging system are degraded by many factors, such as light scattering, noise, defocus of the optical system, atmospheric disturbances, platform vibrations, and so on. The characteristics of low illumination, few details, and high noise make state-of-the-art restoration methods fail. In this paper, we present a restoration method designed especially for range-gated imaging systems. The degradation process is divided into two parts: a static part and a dynamic part. For the static part, we establish the physical model of the imaging system according to laser transmission theory and estimate the static point spread function (PSF). For the dynamic part, a so-called light vein feature extraction method is presented to estimate the fuzzy parameters of the atmospheric disturbance and platform movement, which contribute to the dynamic PSF. Finally, combining the static and dynamic PSFs, an iterative updating framework is used to restore the image. Compared with state-of-the-art methods, the proposed method can effectively suppress ringing artifacts and achieve better performance in a range-gated imaging system.
Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen
2018-06-01
In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a low-speed, destructive, expensive but precise method. In this study, a novel method, called "image profilometry," has been introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples based on image processing and machine vision. The impacts of influential parameters such as image resolution and filtering approach for elimination of the long wavelength surface undulations on the accuracy of the image profilometry results have been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both the stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution+cutoff. In these conditions, the best and worst correlation coefficients (R 2 ) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicated that the image profilometry predicted the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to the stylus profilometry, particularly in online applications.
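The filtering step (Gaussian convolution + cutoff) and the roughness parameters can be sketched for a 1-D profile. The kernel below follows the standard Gaussian profile-filter form, and only two of the ten roughness parameters (Ra, Rq) are shown; the paper's image-capture pipeline is not reproduced:

```python
import numpy as np

def gaussian_weights(cutoff, dx):
    """Gaussian profile-filter weights (standard ISO 16610-21 form)
    for cutoff wavelength `cutoff`, sampled at spacing `dx`."""
    alpha = np.sqrt(np.log(2.0) / np.pi)
    x = np.arange(-cutoff, cutoff + dx / 2, dx)
    w = np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    return w / w.sum()

def roughness_params(z, dx, cutoff):
    """Split a profile into waviness (Gaussian low-pass) and roughness,
    then return Ra (mean absolute) and Rq (RMS) over the filter's
    valid interior, avoiding edge effects of the convolution."""
    w = gaussian_weights(cutoff, dx)
    waviness = np.convolve(z, w, mode="same")
    pad = len(w) // 2
    r = (z - waviness)[pad:-pad]
    return float(np.mean(np.abs(r))), float(np.sqrt(np.mean(r ** 2)))
```

For a pure sinusoidal roughness of amplitude A well below the cutoff wavelength, Ra approaches 2A/π and Rq approaches A/√2, which gives a quick sanity check of the implementation.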
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Comparative study on the performance of textural image features for active contour segmentation.
Moraru, Luminita; Moldovanu, Simona
2012-07-01
We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.
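The standard-deviation texture feature with a 5×5 kernel can be computed per pixel as follows. This is a generic sketch of the feature map only; the paper couples it with gray-level information inside a parametric active contour:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std_map(img, k=5):
    """Per-pixel standard deviation over a k x k neighborhood
    (reflect padding keeps the output the same size as the input)."""
    pad = k // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    windows = sliding_window_view(p, (k, k))  # one k x k window per pixel
    return windows.std(axis=(-1, -2))
```

Homogeneous regions produce values near zero while edges and texture produce large values, which is why this first-order statistic is a useful driving term for contour evolution.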
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Subok; Jennings, Robert; Liu Haimo
Purpose: For the last few years, development and optimization of three-dimensional (3D) x-ray breast imaging systems, such as digital breast tomosynthesis (DBT) and computed tomography, have drawn much attention from the medical imaging community, both academia and industry. However, there is still much room for understanding how best to optimize and evaluate these devices over a large space of system parameters and geometries. Current evaluation methods, which work well for 2D systems, do not incorporate the depth information available from 3D imaging systems. Therefore, it is critical to develop a statistically sound evaluation method to investigate the usefulness of including depth and background-variability information in the assessment and optimization of 3D systems. Methods: In this paper, we present a mathematical framework for the statistical assessment of planar and 3D x-ray breast imaging systems. Our method is based on statistical decision theory, in particular, making use of the ideal linear observer called the Hotelling observer. We also present a physical phantom that consists of spheres of different sizes and materials for producing an ensemble of randomly varying backgrounds to be imaged for a given patient class. Lastly, we demonstrate our evaluation method by comparing laboratory mammography and three-angle DBT systems for signal detection tasks using the phantom's projection data. We compare the variable phantom case to that of a phantom of the same dimensions filled with water, which we call the uniform phantom, based on the performance of the Hotelling observer as a function of signal size and intensity. Results: Detectability trends calculated using the variable and uniform phantom methods are different from each other for both mammography and DBT systems.
Conclusions: Our results indicate that measuring a system's detection performance with consideration of background variability may lead to differences in system performance estimates and comparisons. For the assessment of 3D systems, to accurately determine trade-offs between image quality and radiation dose, it is critical to incorporate randomness arising from the imaging chain, including background variability, into system performance calculations.
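The Hotelling observer at the core of this framework reduces to a covariance-weighted matched filter. A minimal sketch, assuming a signal-known-exactly task and a sample covariance estimated from background images; the regularization constant is an assumption added for numerical stability:

```python
import numpy as np

def hotelling_detectability(signal, backgrounds, reg=1e-6):
    """Hotelling-observer SNR for a known signal in random backgrounds:
    SNR^2 = s^T K^{-1} s, with K the sample background covariance
    (lightly regularized so the linear solve is stable)."""
    K = np.cov(backgrounds, rowvar=False)
    K = K + reg * np.trace(K) / K.shape[0] * np.eye(K.shape[0])
    template = np.linalg.solve(K, signal)  # Hotelling template w = K^{-1} s
    return float(np.sqrt(signal @ template))
```

For white backgrounds of unit variance the SNR reduces to the signal's Euclidean norm, so a spike of amplitude 2 yields a detectability near 2; with correlated (variable-phantom) backgrounds the covariance reweights the template and the detectability trends change, which is the effect the study measures.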
Advanced Imaging Methods for Long-Baseline Optical Interferometry
NASA Astrophysics Data System (ADS)
Le Besnerais, G.; Lacour, S.; Mugnier, L. M.; Thiebaut, E.; Perrin, G.; Meimon, S.
2008-11-01
We address the data processing methods needed for imaging with a long baseline optical interferometer. We first describe parametric reconstruction approaches and adopt a general formulation of nonparametric image reconstruction as the solution of a constrained optimization problem. Within this framework, we present two recent reconstruction methods, Mira and Wisard, representative of the two generic approaches for dealing with the missing phase information. Mira is based on an implicit approach and a direct optimization of a Bayesian criterion, while Wisard adopts a self-calibration approach and an alternate minimization scheme inspired by radio astronomy. Both methods can handle various regularization criteria. We review commonly used regularization terms and introduce an original quadratic regularization called the “soft support constraint” that favors object compactness. It yields images of quality comparable to nonquadratic regularizations on the synthetic data we have processed. We then perform image reconstructions, both parametric and nonparametric, on astronomical data from the IOTA interferometer, and discuss the respective roles of parametric and nonparametric approaches for optical interferometric imaging.
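Because the soft support constraint is quadratic, a reconstruction using it alone has a closed form via the normal equations. The weight law below (weights growing quadratically away from the image centre) is an assumed illustrative form, not necessarily the authors' exact prior:

```python
import numpy as np

def soft_support_weights(shape, radius):
    """Assumed quadratic 'soft support' weight map: near zero at the
    centre and growing outward, so the penalty favours compact objects."""
    yy, xx = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    return ((yy - cy) ** 2 + (xx - cx) ** 2) / radius ** 2

def reconstruct_quadratic(A, y, w, mu):
    """argmin_x ||y - A x||^2 + mu * sum_i w_i * x_i^2,
    solved in closed form via the normal equations."""
    return np.linalg.solve(A.T @ A + mu * np.diag(w.ravel()), A.T @ y)
```

In a real interferometric problem A would be the (sparse-coverage) measurement operator and the quadratic prior keeps the optimization convex, which is the practical appeal noted in the abstract.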
Siegel, Nisan; Storrie, Brian; Bruce, Marc
2016-01-01
FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443
Simultaneous transmission for an encrypted image and a double random-phase encryption key
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Zhou, Xin; Li, Da-Hai; Zhou, Ding-Fu
2007-06-01
We propose a method to simultaneously transmit double random-phase encryption key and an encrypted image by making use of the fact that an acceptable decryption result can be obtained when only partial data of the encrypted image have been taken in the decryption process. First, the original image data are encoded as an encrypted image by a double random-phase encryption technique. Second, a double random-phase encryption key is encoded as an encoded key by the Rivest-Shamir-Adelman (RSA) public-key encryption algorithm. Then the amplitude of the encrypted image is modulated by the encoded key to form what we call an encoded image. Finally, the encoded image that carries both the encrypted image and the encoded key is delivered to the receiver. Based on such a method, the receiver can have an acceptable result and secure transmission can be guaranteed by the RSA cipher system.
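The double random-phase encryption step itself is compact enough to sketch with FFTs. This shows only the DRPE round trip, not the RSA encoding of the key or the amplitude modulation used for simultaneous transmission:

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random-phase encryption: a random phase mask in the input
    plane and a second random phase mask in the Fourier plane."""
    field = img * np.exp(1j * phase1)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * phase2))

def drpe_decrypt(enc, phase1, phase2):
    """Undo the Fourier-plane mask, then the input-plane mask."""
    field = np.fft.ifft2(np.fft.fft2(enc) * np.exp(-1j * phase2))
    return np.abs(field * np.exp(-1j * phase1))
```

The encrypted field is complex and noise-like; the round trip recovers the original image exactly, and the robustness to missing data mentioned in the abstract follows from the information being spread across the whole encrypted field.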
NASA Astrophysics Data System (ADS)
Mobasheri, Mohammad Reza; Ghamary-Asl, Mohsen
2011-12-01
Imaging through hyperspectral technology is a powerful tool that can be used to spectrally identify and spatially map materials based on their specific absorption characteristics in the electromagnetic spectrum. A robust method called Tetracorder has shown its effectiveness at material identification and mapping, using a set of algorithms within an expert-system decision-making framework. In this study, using some stages of Tetracorder, a technique called classification by diagnosing all absorption features (CDAF) is introduced. This technique enables one to assign a class to the most abundant mineral in each pixel with high accuracy. The technique is based on the derivation of information from the reflectance spectra of the image. This can be done by extracting the spectral absorption features of the minerals from their respective laboratory-measured reflectance spectra and comparing them with those extracted from the pixels in the image. The CDAF technique was executed on an AVIRIS image, where the results show an overall accuracy of better than 96%.
Choi, Hailey H; Clark, Jennifer; Jay, Ann K; Filice, Ross W
2018-02-01
Feedback is an essential part of medical training, where trainees are provided with information regarding their performance and further directions for improvement. In diagnostic radiology, feedback entails a detailed review of the differences between the residents' preliminary interpretation and the attendings' final interpretation of imaging studies. While the on-call experience of independently interpreting complex cases is important to resident education, the more traditional synchronous "read-out" or joint review is impossible due to multiple constraints. Without an efficient method to compare reports, grade discrepancies, convey salient teaching points, and view images, valuable lessons in image interpretation and report construction are lost. We developed a streamlined web-based system, including report comparison and image viewing, to minimize barriers in asynchronous communication between attending radiologists and on-call residents. Our system provides real-time, end-to-end delivery of case-specific and user-specific feedback in a streamlined, easy-to-view format. We assessed quality improvement subjectively through surveys and objectively through participation metrics. Our web-based feedback system improved user satisfaction for both attending and resident radiologists, and increased attending participation, particularly with regards to cases where substantive discrepancies were identified.
Extraction of composite visual objects from audiovisual materials
NASA Astrophysics Data System (ADS)
Durand, Gwenael; Thienot, Cedric; Faudemay, Pascal
1999-08-01
An effective analysis of Visual Objects appearing in still images and video frames is required in order to offer fine grain access to multimedia and audiovisual contents. In previous papers, we showed how our method for segmenting still images into visual objects could improve content-based image retrieval and video analysis methods. Visual Objects are used in particular for extracting semantic knowledge about the contents. However, low-level segmentation methods for still images are not likely to extract a complex object as a whole but instead as a set of several sub-objects. For example, a person would be segmented into three visual objects: a face, hair, and a body. In this paper, we introduce the concept of Composite Visual Object. Such an object is hierarchically composed of sub-objects called Component Objects.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-03-16
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
Image quality improvement in cone-beam CT using the super-resolution technique.
Oyama, Asuka; Kumagai, Shinobu; Arai, Norikazu; Takata, Takeshi; Saikawa, Yusuke; Shiraishi, Kenshiro; Kobayashi, Takenori; Kotoku, Jun'ichi
2018-04-05
This study was conducted to improve cone-beam computed tomography (CBCT) image quality using the super-resolution technique, a method of inferring a high-resolution image from a low-resolution image. This technique is used with two matrices, so-called dictionaries, constructed respectively from high-resolution and low-resolution image bases. For this study, a CBCT image, as a low-resolution image, is represented as a linear combination of atoms, the image bases in the low-resolution dictionary. The corresponding super-resolution image was inferred by multiplying the coefficients and the high-resolution dictionary atoms extracted from planning CT images. To evaluate the proposed method, we computed the root mean square error (RMSE) and structural similarity (SSIM). The resulting RMSE and SSIM between the super-resolution images and the planning CT images were, respectively, as much as 0.81 and 1.29 times better than those obtained without using the super-resolution technique. We used the super-resolution technique to improve the CBCT image quality.
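The two-dictionary inference can be sketched as: sparse-code the low-resolution patch against the low-resolution dictionary, then reconstruct with the paired high-resolution atoms. The orthogonal matching pursuit coder below is a generic stand-in for the paper's coefficient solver:

```python
import numpy as np

def sparse_code_omp(D, x, k):
    """Orthogonal matching pursuit: k-sparse code of x against a
    dictionary D with unit-norm atoms (columns)."""
    idx, r = [], x.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))  # best-correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        r = x - D[:, idx] @ coef                      # orthogonalized residual
    a = np.zeros(D.shape[1])
    a[idx] = coef
    return a

def super_resolve(D_low, D_high, x_low, k=3):
    """Code the low-resolution patch in D_low, then reconstruct with the
    coupled high-resolution atoms (the coupled-dictionary idea)."""
    return D_high @ sparse_code_omp(D_low, x_low, k)
```

The key assumption of the approach is that low- and high-resolution patches share the same sparse coefficients over their respective dictionaries, so coefficients found in one domain transfer directly to the other.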
METHODS DEVELOPMENT FOR THE ANALYSIS OF CHIRAL PESTICIDES
Chiral compounds exist as a pair of nonsuperimposable mirror images called enantiomers. Enantiomers have identical physical-chemical properties, but their interactions with other chiral molecules, toxicity, biodegradation, and fate are often different. Many pharmaceutical com...
In vivo optical imaging and dynamic contrast methods for biomedical research
Hillman, Elizabeth M. C.; Amoozegar, Cyrus B.; Wang, Tracy; McCaslin, Addason F. H.; Bouchard, Matthew B.; Mansfield, James; Levenson, Richard M.
2011-01-01
This paper provides an overview of optical imaging methods commonly applied to basic research applications. Optical imaging is well suited for non-clinical use, since it can exploit an enormous range of endogenous and exogenous forms of contrast that provide information about the structure and function of tissues ranging from single cells to entire organisms. An additional benefit of optical imaging that is often under-exploited is its ability to acquire data at high speeds; a feature that enables it to not only observe static distributions of contrast, but to probe and characterize dynamic events related to physiology, disease progression and acute interventions in real time. The benefits and limitations of in vivo optical imaging for biomedical research applications are described, followed by a perspective on future applications of optical imaging for basic research centred on a recently introduced real-time imaging technique called dynamic contrast-enhanced small animal molecular imaging (DyCE). PMID:22006910
Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.
Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří
2017-11-10
We propose and demonstrate a spectrally-resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive sensing technique that enables imaging with a single-pixel photodetector. The method relies on encoding an image with a series of random patterns. In our approach, the image encoding was maintained via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we attained a realization of a spectrally-resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and number of data points. Finally, we compare the presented technique to hyperspectral imaging based on sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral areas.
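The single-pixel measurement model is linear, so with enough patterns a plain least-squares inversion already recovers the scene; real compressive-sensing systems use fewer patterns plus a sparsity-promoting solver. A minimal sketch with random stand-in patterns (the paper uses physical speckle patterns instead):

```python
import numpy as np

def single_pixel_measure(scene, patterns):
    """One detector value per pattern: m_i = <pattern_i, scene>."""
    return patterns.reshape(len(patterns), -1) @ scene.ravel()

def reconstruct_least_squares(measurements, patterns, shape):
    """Minimum-norm least-squares recovery, a stand-in for the
    compressive-sensing reconstruction used in practice."""
    A = patterns.reshape(len(patterns), -1)
    x, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return x.reshape(shape)
```

With a spectrometer as the detector, each pattern yields a whole spectrum rather than a scalar, and the same reconstruction applied per wavelength produces the hyperspectral data cube.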
Development of image mappers for hyperspectral biomedical imaging applications
Kester, Robert T.; Gao, Liang; Tkaczyk, Tomasz S.
2010-01-01
A new design and fabrication method is presented for creating large-format (>100 mirror facets) image mappers for a snapshot hyperspectral biomedical imaging system called an image mapping spectrometer (IMS). To verify this approach a 250 facet image mapper with 25 multiple-tilt angles is designed for a compact IMS that groups the 25 subpupils in a 5 × 5 matrix residing within a single collecting objective's pupil. The image mapper is fabricated by precision diamond raster fly cutting using surface-shaped tools. The individual mirror facets have minimal edge eating, tilt errors of <1 mrad, and an average roughness of 5.4 nm. PMID:20357875
Diffraction enhanced x-ray imaging for quantitative phase contrast studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agrawal, A. K.; Singh, B., E-mail: balwants@rrcat.gov.in; Kashyap, Y. S.
2016-05-23
Conventional X-ray imaging based on absorption contrast permits limited visibility of features having small density and thickness variations. For imaging of weakly absorbing materials or materials possessing similar densities, a novel phase contrast imaging technique called diffraction enhanced imaging has been designed and developed at the imaging beamline of Indus-2, RRCAT, Indore. The technique provides improved visibility of interfaces and shows high contrast in the image for small density or thickness gradients in the bulk. This paper presents the basic principle, instrumentation, and analysis methods of this technique. Initial results of quantitative phase retrieval carried out on various samples are also presented.
Yi, Faliu; Moon, Inkyu; Javidi, Bahram
2017-10-01
In this paper, we present two models for automatically extracting red blood cells (RBCs) from RBCs holographic images based on a deep learning fully convolutional neural network (FCN) algorithm. The first model, called FCN-1, only uses the FCN algorithm to carry out RBCs prediction, whereas the second model, called FCN-2, combines the FCN approach with the marker-controlled watershed transform segmentation scheme to achieve RBCs extraction. Both models achieve good segmentation accuracy. In addition, the second model has much better performance in terms of cell separation than traditional segmentation methods. In the proposed methods, the RBCs phase images are first numerically reconstructed from RBCs holograms recorded with off-axis digital holographic microscopy. Then, some RBCs phase images are manually segmented and used as training data to fine-tune the FCN. Finally, each pixel in new input RBCs phase images is predicted into either foreground or background using the trained FCN models. The RBCs prediction result from the first model is the final segmentation result, whereas the result from the second model is used as the internal markers of the marker-controlled transform algorithm for further segmentation. Experimental results show that the given schemes can automatically extract RBCs from RBCs phase images and much better RBCs separation results are obtained when the FCN technique is combined with the marker-controlled watershed segmentation algorithm.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, enabling the camera pixels always to receive a reasonable exposure intensity through DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm that effectively modulates the light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and perform HDRI on different objects.
Probabilistic atlas and geometric variability estimation to drive tissue segmentation.
Xu, Hao; Thirion, Bertrand; Allassonnière, Stéphanie
2014-09-10
Computerized anatomical atlases play an important role in medical image analysis. While an atlas usually refers to a standard or mean image, also called a template, which presumably represents a given population well, this alone is not enough to characterize the observed population in detail. A template image should be learned jointly with the geometric variability of the shapes represented in the observations. These two quantities together form the atlas of the corresponding population. The geometric variability is modeled as deformations of the template image so that it fits the observations. In this paper, we provide a detailed analysis of a new generative statistical model based on dense deformable templates that represents several tissue types observed in medical images. Our atlas contains both an estimate of the probability maps of each tissue (called a class) and the deformation metric. We use a stochastic algorithm to estimate the probabilistic atlas from a dataset. This atlas is then used in an atlas-based segmentation method to segment new images. Experiments are shown on brain T1 MRI datasets. Copyright © 2014 John Wiley & Sons, Ltd.
Visual question answering using hierarchical dynamic memory networks
NASA Astrophysics Data System (ADS)
Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei
2018-04-01
Visual Question Answering (VQA) is one of the most popular research fields in machine learning; it aims to teach the computer to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, one of the best-performing algorithms to date. Additionally, we use bi-directional LSTMs, which retain more information from the question and image than the original units, so that we can capture information from both past and future sentences. We then rebuild the hierarchical architecture for both question attention and visual attention. Furthermore, we accelerate the algorithm via a technique called Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves on the state of the art on the large COCO-QA dataset, compared with other methods.
LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel
2017-10-01
Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines the application of a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
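The speed-up from Locality Sensitive Hashing can be illustrated with bit-sampling LSH for Hamming space, the standard scheme for binary patterns: hash each pattern by a fixed random subset of its positions, then search only the target's bucket. The RLE-compressed Hamming computation is omitted here; this sketch (with made-up pattern data) shows only the bucketing idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_buckets(patterns, n_bits=8):
    """Bit-sampling LSH for Hamming space: hash each binary pattern by a
    fixed random subset of its positions."""
    idx = rng.choice(patterns.shape[1], size=n_bits, replace=False)
    buckets = {}
    for i, p in enumerate(patterns):
        buckets.setdefault(tuple(p[idx]), []).append(i)
    return idx, buckets

def query(target, idx, buckets, patterns):
    """Scan only the target's bucket; fall back to a full scan on a miss."""
    cand = buckets.get(tuple(target[idx]), range(len(patterns)))
    return min(cand, key=lambda i: np.count_nonzero(patterns[i] != target))

patterns = rng.integers(0, 2, size=(1000, 25))  # made-up binary patterns
idx, buckets = lsh_buckets(patterns)
best = query(patterns[42], idx, buckets, patterns)
print(best)  # recovers the stored pattern's own index
```

With 8 hashed bits the candidate list shrinks from 1000 patterns to a few per bucket, which is the source of the acceleration the paper describes.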
NASA Astrophysics Data System (ADS)
Almasganj, Mohammad; Adabi, Saba; Fatemizadeh, Emad; Xu, Qiuyun; Sadeghi, Hamid; Daveluy, Steven; Nasiriavanaki, Mohammadreza
2017-03-01
Optical Coherence Tomography (OCT) has great potential to elicit clinically useful information from tissues due to its high axial and transversal resolution. In practice, an OCT setup cannot reach its theoretical resolution due to imperfections in its components, which make its images blurry. The blurriness differs across regions of the image; thus, it cannot be modeled by a unique point spread function (PSF). In this paper, we investigate the use of solid phantoms to estimate the PSF of each sub-region of the imaging system. We then utilize Lucy-Richardson, Hybr, and total variation (TV) based iterative deconvolution methods to mitigate the resulting spatially variant blurriness. It is shown that the TV based method suppresses the so-called speckle noise in OCT images better than the two other approaches. The performance of the proposed algorithm is tested on various samples, including several skin tissues as well as a test image blurred with a synthetic PSF map, demonstrating qualitatively and quantitatively the advantage of TV based deconvolution using a spatially variant PSF for enhancing image quality.
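The Lucy-Richardson branch of the comparison is the easiest to sketch. Below is the textbook Richardson-Lucy iteration with a single, spatially invariant PSF; the paper's variant would apply such an update per sub-region with a PSF estimated from the phantom for that region. The image and PSF are toy stand-ins.

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(blurred, psf, iters=30):
    """Plain Richardson-Lucy deconvolution with a spatially invariant PSF."""
    est = np.full_like(blurred, blurred.mean())  # flat initial estimate
    psf_m = psf[::-1, ::-1]                      # mirrored PSF for the correction step
    for _ in range(iters):
        conv = convolve2d(est, psf, mode='same', boundary='symm')
        ratio = blurred / np.maximum(conv, 1e-12)
        est *= convolve2d(ratio, psf_m, mode='same', boundary='symm')
    return est

truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0                        # toy scene: a bright square
psf = np.ones((5, 5)) / 25.0                     # toy PSF: 5x5 box blur
blurred = convolve2d(truth, psf, mode='same', boundary='symm')
est = richardson_lucy(blurred, psf)
print(np.mean((blurred - truth) ** 2), np.mean((est - truth) ** 2))
```

The multiplicative update preserves non-negativity, which is why Richardson-Lucy is a common baseline for intensity images such as OCT scans.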
Color filter array pattern identification using variance of color difference image
NASA Astrophysics Data System (ADS)
Shin, Hyun Jun; Jeon, Jong Ju; Eom, Il Kyu
2017-07-01
A color filter array is placed on the image sensor of a digital camera to acquire color images. Each pixel records only one color, since the image sensor can measure only one color per pixel. Therefore, the empty pixels are filled using an interpolation process called demosaicing. The original and the interpolated pixels have different statistical characteristics. If the image is modified by manipulation or forgery, the color filter array pattern is altered. This pattern change can be a clue for image forgery detection. However, most forgery detection algorithms have the disadvantage of assuming a known color filter array pattern. We present a method for identifying the color filter array pattern. Initially, the local mean is eliminated to remove the background effect. Subsequently, a color difference block is constructed to emphasize the difference between the original and the interpolated pixels. The variance of the color difference image is proposed as a means of estimating the color filter array configuration. The experimental results show that the proposed method is effective in identifying the color filter array pattern. Compared with conventional methods, our method provides superior performance.
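The statistical asymmetry the method exploits can be demonstrated on a toy green channel: sites reconstructed by bilinear interpolation reproduce their neighbour mean almost exactly, so the variance of the colour-difference residual collapses on the interpolated lattice. The sketch below fakes the mosaicing/demosaicing step and tests the two parity hypotheses; it is an illustration of the principle, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.random((64, 64))   # stand-in "true" green channel

# Fake the CFA pipeline: sample G on even-parity sites, bilinearly
# interpolate the odd-parity sites (a toy demosaicer, not the paper's).
par = np.add.outer(np.arange(64), np.arange(64)) % 2
nm = np.zeros_like(g)
nm[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]) / 4
dem = np.where(par == 0, g, nm)

# Residual against the neighbour mean, evaluated under each parity hypothesis
nm2 = np.zeros_like(dem)
nm2[1:-1, 1:-1] = (dem[:-2, 1:-1] + dem[2:, 1:-1] + dem[1:-1, :-2] + dem[1:-1, 2:]) / 4
res = (dem - nm2)[2:-2, 2:-2]
v = [res[par[2:-2, 2:-2] == p].var() for p in (0, 1)]
print(int(np.argmin(v)))  # parity 1, the interpolated lattice, has near-zero variance
```

The lattice with the smaller residual variance is the one that was interpolated, which identifies the sampling phase of the array.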
Improved Dot Diffusion For Image Halftoning
1999-01-01
The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved ... by optimization of the so-called class matrix so that the resulting halftones are comparable to error diffused halftones. In this paper we will ... first review the dot diffusion method. Previously, 8x8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is
Fourier spatial frequency analysis for image classification: training the training set
NASA Astrophysics Data System (ADS)
Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart
2016-04-01
The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF grows with training set size, accuracy may decrease as the training set gets larger. This calls for a method to identify a subset of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method, we extract the DFSF spectrum from radiographs of osteoporotic bone, use it as a matched filter set to eliminate noise and image-specific frequencies, and demonstrate that selecting a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space and identify an optimal subset of basis training images.
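A minimal version of a directional Fourier spatial frequency descriptor bins 2D FFT magnitude by orientation. The binning scheme below is a generic assumption, not the authors' exact DFSF definition; a vertical-stripe test image concentrates its energy in the horizontal-frequency bin.

```python
import numpy as np

def directional_spectrum(img, n_bins=8):
    """Sum FFT magnitude per orientation bin (DC removed)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    yy = yy - h // 2
    xx = xx - w // 2
    F[h // 2, w // 2] = 0.0                    # drop the DC component
    ang = np.mod(np.arctan2(yy, xx), np.pi)    # orientation folded to [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    return np.array([F[bins == b].sum() for b in range(n_bins)])

n = 64
img = np.sin(2 * np.pi * 8 * np.arange(n) / n)[None, :] * np.ones((n, 1))
spec = directional_spectrum(img)
print(int(np.argmax(spec)))  # vertical stripes -> energy in bin 0 (horizontal frequency)
```

A feature vector of such per-orientation energies is the kind of descriptor an SVM could then be trained on.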
An image-based automatic recognition method for the flowering stage of maize
NASA Astrophysics Data System (ADS)
Yu, Zhenghong; Zhou, Huabing; Li, Cuina
2018-03-01
In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, we use low-rank matrix recovery to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. A comparative experiment has been carried out to verify the validity of our method, and the results indicate that it can meet the demands of practical observation.
Multi data mode method as an alternative way for SPM studies of high relief surfaces
NASA Astrophysics Data System (ADS)
Abdullayeva, S. H.; Molchanov, S. P.; Mamedov, N. T.; Alekperov, S. D.
2006-09-01
In this paper we report the results of our studies of the high relief surfaces of an Al oxide-based ceramic catalyst by SPM contact mode and, for comparison, by the so-called Multi Data Mode (MDM) method. We failed to obtain any reasonable image of the highly developed surfaces of the above material with the first method, but were successful when applying the second. The topographic and complementary images obtained by MDM probing with high resolution are discussed to show the full range of applications possible using MDM.
Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musgrove, Cameron
2014-05-01
For synthetic aperture radar image products, interference can degrade the quality of the images, while techniques to mitigate the interference also reduce the image quality. Usually the radar system designer will try to balance the amount of mitigation against the amount of interference to optimize the image quality. This may work well for many situations, but coherent data products derived from the image products are more sensitive than the human eye to distortions caused by interference and its mitigation. This dissertation examines the effect that interference and interference mitigation have upon coherent data products. An improvement to the standard notch mitigation is introduced, called the equalization notch. Other methods are suggested to mitigate interference while improving the quality of coherent data products over existing methods.
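A standard spectral notch, the baseline the equalization notch improves upon, simply zeros the FFT bins around the interference frequency. The sketch below applies such a hard notch to a tone plus narrowband interference; the equalization notch itself is not reproduced here, and the signals are made up.

```python
import numpy as np

def notch_filter(signal, fs, f0, half_width):
    """Hard spectral notch: zero the FFT bins within half_width of +/- f0."""
    S = np.fft.fft(signal)
    freqs = np.fft.fftfreq(len(signal), 1.0 / fs)
    S[np.abs(np.abs(freqs) - f0) <= half_width] = 0.0
    return np.fft.ifft(S).real

fs, n = 1000, 1000
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 37 * t)           # stand-in scene signal
rfi = 2.0 * np.sin(2 * np.pi * 120 * t)      # narrowband interference
out = notch_filter(clean + rfi, fs, f0=120, half_width=2)
print(np.max(np.abs(out - clean)))  # interference removed almost exactly
```

The cost of the hard notch is that any scene energy in the zeroed bins is lost too, which is exactly the distortion that coherent products such as interferograms are sensitive to.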
Multiple Active Contours Guided by Differential Evolution for Medical Image Segmentation
Cruz-Aceves, I.; Avina-Cervantes, J. G.; Lopez-Hernandez, J. M.; Rostro-Gonzalez, H.; Garcia-Capulin, C. H.; Torres-Cisneros, M.; Guzman-Cabrera, R.
2013-01-01
This paper presents a new image segmentation method based on multiple active contours guided by differential evolution, called MACDE. The segmentation method uses differential evolution over a polar coordinate system to increase the exploration and exploitation capabilities regarding the classical active contour model. To evaluate the performance of the proposed method, a set of synthetic images with complex objects, Gaussian noise, and deep concavities is introduced. Subsequently, MACDE is applied on datasets of sequential computed tomography and magnetic resonance images which contain the human heart and the human left ventricle, respectively. Finally, to obtain a quantitative and qualitative evaluation of the medical image segmentations compared to regions outlined by experts, a set of distance and similarity metrics has been adopted. According to the experimental results, MACDE outperforms the classical active contour model and the interactive Tseng method in terms of efficiency and robustness for obtaining the optimal control points and attains a high accuracy segmentation. PMID:23983809
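The differential evolution engine driving MACDE's control points can be sketched generically: the classic DE/rand/1/bin scheme mutates three random population members and crosses the result with the current individual. Below it optimizes a toy sphere function instead of a contour energy; all parameters are illustrative, not the paper's settings.

```python
import numpy as np

def devo(fobj, bounds, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Classic DE/rand/1/bin. MACDE would evaluate an active-contour
    energy over polar-coordinate control points instead of this toy fobj."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)
    fit = np.array([fobj(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
            cross = rng.random(dim) < CR                # binomial crossover
            trial = np.where(cross, mutant, X[i])
            f_trial = fobj(trial)
            if f_trial < fit[i]:                        # greedy selection
                X[i], fit[i] = trial, f_trial
    return X[np.argmin(fit)], fit.min()

best, val = devo(lambda x: ((x - 3.0) ** 2).sum(), [(-10, 10)] * 2)
print(best)
```

The exploration/exploitation balance the abstract mentions is governed by the mutation factor F and crossover rate CR.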
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Hyeonggon; Attota, Ravikiran, E-mail: ravikiran.attota@nist.gov; Tondare, Vipin
We present a method that uses conventional optical microscopes to determine the number of nanoparticles in a cluster, which is typically not possible using traditional image-based optical methods due to the diffraction limit. The method, called through-focus scanning optical microscopy (TSOM), uses a series of optical images taken at varying focus levels to achieve this. The optical images cannot directly resolve the individual nanoparticles, but they contain information related to the number of particles. The TSOM method makes use of this information to determine the number of nanoparticles in a cluster. Initial good agreement between the simulations and the measurements is also presented. The TSOM method can be applied to fluorescent and non-fluorescent as well as metallic and non-metallic nano-scale materials, including soft materials, making it attractive for tag-less, high-speed, optical analysis of nanoparticles down to 45 nm in diameter.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg, and Tao, and by D. Donoho in 2006. Using the a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image through an iterative method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
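A compact 1D instance of the Split Bregman scheme for total-variation denoising shows the decoupling: the quadratic (ℓ2) u-subproblem is a linear solve, the ℓ1 d-subproblem is closed-form shrinkage, and the Bregman variable b ties them together. The parameters and the step-signal test are illustrative choices, not taken from the paper.

```python
import numpy as np

def split_bregman_tv(f, mu=10.0, lam=5.0, iters=60):
    """Split Bregman for 1D TV denoising:
    min_u |Du|_1 + (mu/2)||u - f||^2, with the splitting d = Du."""
    n = len(f)
    idx = np.arange(n - 1)
    D = np.zeros((n - 1, n))             # forward-difference operator
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0
    A = mu * np.eye(n) + lam * D.T @ D   # system matrix of the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        # u-step: quadratic subproblem -> linear solve
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        # d-step: l1 subproblem -> soft-thresholding (shrinkage)
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0)
        # Bregman update
        b = b + Du - d
    return u

rng = np.random.default_rng(0)
clean = np.r_[np.zeros(32), np.ones(32)]
noisy = clean + 0.2 * rng.standard_normal(64)
den = split_bregman_tv(noisy)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```

Because each subproblem is cheap, the overall iteration is fast, which is the property the abstract highlights.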
Gurcan, Metin N; Tomaszewski, John; Overton, James A; Doyle, Scott; Ruttenberg, Alan; Smith, Barry
2017-02-01
Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology - QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts. Copyright © 2016 Elsevier Inc. All rights reserved.
Robust image watermarking using DWT and SVD for copyright protection
NASA Astrophysics Data System (ADS)
Harjito, Bambang; Suryani, Esti
2017-02-01
The objective of this paper is to propose a robust combined Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) watermarking scheme. The RGB image is called the cover medium, and the watermark image is converted into gray scale. Then, both are transformed using the DWT so that they can be split into several sub-bands, namely LL2, LH2, and HL2. The watermark image is embedded into the cover medium in sub-band LL2. This scheme aims to obtain a higher robustness level than the previous method, which performs SVD matrix factorization of the image for copyright protection. The experimental results show that the proposed method is robust against several image processing attacks such as Gaussian, Poisson, and salt-and-pepper noise, with average Normalized Correlation (NC) values of 0.574863, 0.889784, and 0.889782, respectively. The watermark image can still be detected and extracted.
Background oriented schlieren in a density stratified fluid.
Verso, Lilly; Liberzon, Alex
2015-10-01
Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging has led to synthetic schlieren methods in which the variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables an accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. The multimedia imaging through air-glass-water-glass-air leads to an additional aberration that destroys the reconstruction. A two-step calibration and an image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive full-field density measurements of transparent liquids.
Disocclusion: a variational approach using level lines.
Masnou, Simon
2002-01-01
Object recognition, robot vision, image and film restoration may require the ability to perform disocclusion. We call disocclusion the recovery of occluded areas in a digital image by interpolation from their vicinity. It is shown in this paper how disocclusion can be performed by means of the level-lines structure, which offers a reliable, complete and contrast-invariant representation of images. Level-lines based disocclusion yields a solution that may have strong discontinuities. The proposed method is compatible with Kanizsa's amodal completion theory.
Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds
NASA Astrophysics Data System (ADS)
Boerner, R.; Kröhnert, M.
2016-06-01
3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images and point cloud data, by fitting optical camera systems on top of laser scanners, or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal, distortion-free camera.
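The synthetic image generation rests on the standard pinhole projection: each 3D point is transformed into the camera frame and mapped through the intrinsic matrix K. The sketch below shows only that projection, with made-up intrinsics; splatting point colours at the resulting pixels, plus occlusion handling, would complete the synthetic image.

```python
import numpy as np

def synthetic_pixels(points, K, R, t):
    """Project world points through an ideal, distortion-free pinhole camera.
    Rendering the synthetic image would then splat point attributes
    (e.g. intensity) at these pixel coordinates."""
    cam = points @ R.T + t        # world -> camera frame
    uv = cam @ K.T                # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 10.0],
                [0.1, 0.0, 10.0]])
px = synthetic_pixels(pts, K, np.eye(3), np.zeros(3))
print(px)  # [[320. 240.] [325. 240.]]
```

Matching then amounts to finding the exterior orientation (R, t) that best aligns this synthetic view with the real smartphone image.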
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
2018-01-01
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
Red Lesion Detection Using Dynamic Shape Features for Diabetic Retinopathy Screening.
Seoud, Lama; Hurtut, Thomas; Chelbi, Jihed; Cheriet, Farida; Langlois, J M Pierre
2016-04-01
The development of an automatic telemedicine system for computer-aided screening and grading of diabetic retinopathy depends on reliable detection of retinal lesions in fundus images. In this paper, a novel method for automatic detection of both microaneurysms and hemorrhages in color fundus images is described and validated. The main contribution is a new set of shape features, called Dynamic Shape Features, that do not require precise segmentation of the regions to be classified. These features represent the evolution of the shape during image flooding and allow discrimination between lesions and vessel segments. The method is validated per-lesion and per-image using six databases, four of which are publicly available. It proves to be robust with respect to variability in image resolution, quality, and acquisition system. On the Retinopathy Online Challenge's database, the method achieves a FROC score of 0.420, which ranks it fourth. On the Messidor database, when detecting images with diabetic retinopathy, the proposed method achieves an area under the ROC curve of 0.899, comparable to the score of human experts, and it outperforms state-of-the-art approaches.
Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung
2017-01-01
The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera: a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), chosen among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the image features extracted from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783
Extreme 3D reconstruction of the final ROSETTA/PHILAE landing site
NASA Astrophysics Data System (ADS)
Capanna, Claire; Jorda, Laurent; Lamy, Philippe; Gesquiere, Gilles; Delmas, Cédric; Durand, Joelle; Garmier, Romain; Gaudon, Philippe; Jurado, Eric
2016-04-01
The Philae lander aboard the Rosetta spacecraft successfully landed on the surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) after two rebounds on November 12, 2014. The final landing site, now known as "Abydos", has been identified in images acquired by the OSIRIS imaging system onboard the Rosetta orbiter [1]. The available images of Abydos are very limited in number and reveal an extreme topography containing cliffs and overhangs. Furthermore, the surface is only observed under very high incidence angles of 60° on average, which implies that the images also exhibit many cast shadows. This makes it very difficult to reconstruct the 3D topography with standard methods such as photogrammetry or standard clinometry. We apply a new method called "Multiresolution PhotoClinometry by Deformation" (MPCD, [2]) to retrieve the 3D topography of the area around Abydos. The method works in two main steps: (i) a DTM of this region is extracted from a low resolution MPCD global shape model of comet 67P/C-G, and (ii) the resulting triangular mesh is progressively deformed at increasing spatial sampling down to 0.25 m in order to match a set of 14 images of Abydos with projected pixel scales between 1 and 8 m. The image matching is performed with a quasi-Newton non-linear optimization method called L-BFGS-B [3], especially suited to large-scale problems. Finally, we also checked the compatibility of the final MPCD digital terrain model with a set of five panoramic images obtained by the CIVA-P instrument aboard Philae [4]. [1] Lamy et al., 2016, submitted. [2] Capanna et al., Three-dimensional reconstruction using multiresolution photoclinometry by deformation, The Visual Computer, v. 29(6-8), pp. 825-835, 2013. [3] Morales et al., Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization", ACM Trans. Math. Softw., v. 38(1), pp. 1-4, 2011. [4] Bibring et al., 67P/Churyumov-Gerasimenko surface properties as derived from CIVA panoramic images, Science, v. 349(6247), 2015.
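The cited optimizer is available off the shelf: SciPy exposes the same L-BFGS-B routine referenced as [3]. As a sketch of its use, on the Rosenbrock test function rather than the paper's image-matching cost, a bound-constrained minimization looks like:

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function as a stand-in for the image-matching cost;
# bounds play the role of limits on the mesh deformation parameters.
rosen = lambda x: (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2
res = minimize(rosen, x0=np.zeros(2), method='L-BFGS-B',
               bounds=[(-2.0, 2.0), (-2.0, 2.0)])
print(res.success, res.x)
```

L-BFGS-B keeps only a limited-memory approximation of the Hessian, which is what makes it practical for the very large parameter vectors of a deformable mesh.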
Random forest regression for magnetic resonance image synthesis.
Jog, Amod; Carass, Aaron; Roy, Snehashis; Pham, Dzung L; Prince, Jerry L
2017-01-01
By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and perform intensity standardization between different imaging datasets. Copyright © 2016 Elsevier B.V. All rights reserved.
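The regression at REPLICA's core, patch features of the source contrast in, target intensity out, can be imitated with a stock random forest. The features and the synthetic "contrast mapping" below are invented stand-ins, not REPLICA's actual feature set or training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Invented stand-ins: rows are 3x3-patch features of the source contrast,
# the target is a smooth nonlinear function of them ("the other contrast").
src = rng.random((2000, 9))
tgt = np.tanh(3.0 * src[:, 4]) + 0.1 * src.sum(axis=1)

rf = RandomForestRegressor(n_estimators=50, random_state=0)
rf.fit(src[:1000], tgt[:1000])      # training pairs of co-registered contrasts
pred = rf.predict(src[1000:])       # synthesize intensities for unseen patches
print(np.mean((pred - tgt[1000:]) ** 2))
```

Once trained, prediction is a sequence of cheap tree traversals per voxel, which is consistent with the abstract's claim that the method is computationally fast.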
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually solved by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and put into the recognition process; and (3) the effect of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
Image reconstruction of muon tomographic data using a density-based clustering method
NASA Astrophysics Data System (ADS)
Perry, Kimberly B.
Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
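OPTICS belongs to the density-based family of clustering algorithms. The full OPTICS reachability ordering is more involved, so the sketch below is a minimal DBSCAN-style stand-in on synthetic 2D points (hypothetical `eps` and `min_pts` parameters) that conveys the core idea: points in dense neighborhoods are grouped, and isolated points are left as noise.

```python
import numpy as np

def density_cluster(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN-style clustering: grow clusters from core points
    (points with at least min_pts neighbors within eps); points that
    never join a dense region are labeled noise (-1)."""
    n = len(points)
    labels = np.full(n, -1)
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neighbors = np.where(dists[i] <= eps)[0]
        if len(neighbors) < min_pts:
            continue  # not a core point; stays noise unless absorbed later
        labels[i] = cluster
        stack = list(neighbors)
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nb = np.where(dists[j] <= eps)[0]
                if len(nb) >= min_pts:   # only core points seed expansion
                    stack.extend(nb)
        cluster += 1
    return labels
```

In the tomographic setting, dense clusters of muon scattering points would flag the high-Z regions described in the abstract.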
Matharu, J; Hale, B; Ammar, M; Brennan, P A
2016-10-01
With the widespread use of smartphones, text messaging has become an accepted form of communication for both social and professional use in medicine. To our knowledge no published studies have assessed the prevalence and use of short message service (SMS) texting by doctors on call. We have used an online questionnaire to seek information from doctors in a large NHS Trust in the UK about their use of texting while on call, what they use it for, and whether they send images relevant to patients' care. We received 302 responses (43% response rate), of whom 166 (55%) used SMS while on call. There was a significant association between SMS and age group (p=0.005), with the 20-30-year-old group using it much more than the other age groups. Doctors in the surgical specialties used it significantly less than those in other speciality groups (p<0.001). Texting while on call was deemed to be safe and reliable (p<0.001). Eighteen clinicians (11%) admitted to routinely sending images of patients by text, despite some being identifiable. Texting was mainly used to update colleagues on patients' progress and give information about times of ward rounds and meetings. With the increasing use of texting in healthcare, much of which seems to be unregulated, further work and detailed guidance is required on what information may be given to ensure confidentiality and that SMS is a safe and acceptable method of communication to use when on call. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
An approach of point cloud denoising based on improved bilateral filtering
NASA Astrophysics Data System (ADS)
Zheng, Zeling; Jia, Songmin; Zhang, Guoliang; Li, Xiuzhi; Zhang, Xiangyin
2018-04-01
An omnidirectional mobile platform is designed for building point clouds, based on an improved filtering algorithm that is employed to handle the depth image. First, the mobile platform can move flexibly and its control interface is convenient. Then, because the traditional bilateral filtering algorithm is time-consuming and inefficient, a novel method called local bilateral filtering (LBF) is proposed. LBF is applied to process depth images obtained by the Kinect sensor, and the results show that noise removal is improved compared with bilateral filtering. Offline, the color images and the processed depth images are used to build point clouds. Finally, experimental results demonstrate that our method improves both the processing speed for depth images and the quality of the resulting point clouds.
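The paper's LBF variant is not specified in the abstract, so the sketch below implements only the standard bilateral filter it improves on, applied to a small synthetic depth image; the window radius and sigma parameters are hypothetical.

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Plain bilateral filter on a depth image: each output pixel is a
    weighted mean of its neighbors, with weights falling off both with
    spatial distance (sigma_s) and with depth difference (sigma_r), so
    depth edges between surfaces are preserved while noise is smoothed."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = depth[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((patch - depth[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = w_s * w_r
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out
```

The range weight `w_r` is what distinguishes this from a plain Gaussian blur: neighbors on the far side of a depth discontinuity get near-zero weight, so the edge survives.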
Introduction to Seismic Tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Charlotte Anne
2017-11-21
Tomography is a method of obtaining an image of a 3D object by observing the behavior of energy transmissions through the object. The image is obtained by interrogating the object with energy sources at a variety of locations and observing the object's effects on the energy at a variety of sensors. Tomography was first used to build 3-dimensional scans through human bodies; these are called computed tomographic (CT) scans.
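The interrogate-and-sense idea above can be made concrete with a toy algebraic reconstruction: a hypothetical 2x2 object probed by rays that sum each row and each column (not a real CT geometry, just the smallest instructive case).

```python
import numpy as np

# Toy algebraic tomography: a 2x2 "object" of unknown densities is probed
# by rays summing each row and each column (4 measurements). Recovering
# the cells from the ray sums is a miniature version of CT reconstruction.
true_obj = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
# System matrix: each row selects the cells one ray passes through.
A = np.array([[1, 1, 0, 0],   # row 0 sum
              [0, 0, 1, 1],   # row 1 sum
              [1, 0, 1, 0],   # column 0 sum
              [0, 1, 0, 1]])  # column 1 sum
b = A @ true_obj.ravel()
# Row/column sums alone are rank-deficient (the row sums and column sums
# share a total), so take the minimum-norm least-squares solution, much
# as iterative CT solvers regularize the inversion.
recon, *_ = np.linalg.lstsq(A, b, rcond=None)
print(recon.reshape(2, 2))
```

Real scanners use many more view angles precisely to remove the ambiguity that makes this tiny system rank-deficient.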
Charged-particle emission tomography
Ding, Yijun; Caucci, Luca; Barrett, Harrison H.
2018-01-01
Purpose Conventional charged-particle imaging techniques, such as autoradiography, provide only two-dimensional (2D) ex vivo images of thin tissue slices. In order to get volumetric information, images of multiple thin slices are stacked. This process is time consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick tissue sections, thus increasing laboratory throughput and eliminating distortions due to registration. CPET also has the potential to enable in vivo charged-particle imaging with a window chamber or an endoscope. Methods Our approach to charged-particle emission tomography uses particle-processing detectors (PPDs) to estimate attributes of each detected particle. The attributes we estimate include location, direction of propagation, and/or the energy deposited in the detector. Estimated attributes are then fed into a reconstruction algorithm to reconstruct the 3D distribution of charged-particle-emitting radionuclides. Several setups to realize PPDs are designed. Reconstruction algorithms for CPET are developed. Results Reconstruction results from simulated data showed that a PPD enables CPET if the PPD measures more attributes than just the position of each detected particle. Experiments showed that a two-foil charged-particle detector is able to measure the position and direction of incident alpha particles. Conclusions We proposed a new volumetric imaging technique for charged-particle-emitting radionuclides, which we have called charged-particle emission tomography (CPET). We also proposed a new class of charged-particle detectors, which we have called particle-processing detectors (PPDs). When a PPD is used to measure the direction and/or energy attributes along with the position attributes, CPET is feasible. PMID:28370094
Estimating 3D topographic map of optic nerve head from a single fundus image
NASA Astrophysics Data System (ADS)
Wang, Peipei; Sun, Jiuai
2018-04-01
The optic nerve head, also called the optic disc, is the distal portion of the optic nerve, located on and clinically visible at the retinal surface. It is a 3-dimensional, elliptically shaped structure with a central depression called the optic cup. The shape of the ONH and the size of the depression can vary due to different retinopathies or angiopathies, so estimating the topography of the optic nerve head is significant for assisting the diagnosis of these retina-related complications. This work describes a computer vision based method, shape from shading (SFS), to recover and visualize a 3D topographic map of the optic nerve head from a normal fundus image. The work is expected to be helpful for assessing complications associated with deformation of the optic nerve head, such as glaucoma and diabetes. The illumination is modelled as uniform over the area around the optic nerve head, and its direction is estimated from the available image. The Tsai discrete method is employed to recover the 3D topographic map of the optic nerve head. Initial experimental results demonstrate that our approach works on most fundus images and provides a cheap but good alternative for rendering and visualizing the topographic information of the optic nerve head for potential clinical use.
Hierarchical Feature Extraction With Local Neural Response for Image Recognition.
Li, Hong; Wei, Yantao; Li, Luoqing; Chen, C L P
2013-04-01
In this paper, a hierarchical feature extraction method is proposed for image recognition. The key idea of the proposed method is to extract an effective feature, called local neural response (LNR), of the input image with nontrivial discrimination and invariance properties by alternating between local coding and maximum pooling operation. The local coding, which is carried out on the locally linear manifold, can extract the salient feature of image patches and leads to a sparse measure matrix on which maximum pooling is carried out. The maximum pooling operation builds the translation invariance into the model. We also show that other invariant properties, such as rotation and scaling, can be induced by the proposed model. In addition, a template selection algorithm is presented to reduce computational complexity and to improve the discrimination ability of the LNR. Experimental results show that our method is robust to local distortion and clutter compared with state-of-the-art algorithms.
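The maximum-pooling step that gives the model its translation invariance can be sketched generically; this is plain non-overlapping max pooling in NumPy, not the authors' full LNR pipeline, and the feature maps below are hypothetical.

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling: keep the strongest response in each
    size x size block. The output is unchanged when the input pattern
    shifts by less than a block, which is the translation invariance
    the abstract refers to."""
    h, w = feature_map.shape
    h2, w2 = h // size, w // size
    trimmed = feature_map[:h2 * size, :w2 * size]
    return trimmed.reshape(h2, size, w2, size).max(axis=(1, 3))
```

For example, a strong response at position (0, 1) and the same response shifted to (1, 0) land in the same 2x2 block, so both produce identical pooled maps.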
Reconstruction of noisy and blurred images using blur kernel
NASA Astrophysics Data System (ADS)
Ellappan, Vijayan; Chopra, Vishal
2017-11-01
Blur is common in many digital images; it can be caused by motion of the camera or of objects in the scene. In this work we propose a new method for deblurring images. The method uses sparse representation to identify the blur kernel: by analyzing the image coordinates at coarse and fine scales, we obtain the kernel-based image coordinates and, from that observation, the motion angle of the blurred image. We then calculate the length of the motion kernel using the Radon transform and the Fourier transform, and apply the Lucy-Richardson algorithm, a non-blind deconvolution (NBID) algorithm, to obtain a cleaner, less noisy output. All of these operations are performed in the MATLAB IDE.
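The final deconvolution step names the Lucy-Richardson algorithm. A minimal 1D NumPy version of the classical iteration is sketched below (not the paper's MATLAB pipeline; the spike signal and 3-tap PSF are hypothetical).

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution (1D sketch): iteratively rescale the
    estimate by the ratio of the observed signal to the reblurred
    estimate, correlated with the flipped PSF. The PSF (blur kernel)
    must be known, which is why the method is called non-blind."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

On noiseless data with the correct kernel the iteration re-concentrates a blurred spike back to its original position; in the paper's setting the kernel would be the motion kernel estimated from the angle and length.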
Adaptive compressive ghost imaging based on wavelet trees and sparse representation.
Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie
2014-03-24
Compressed sensing is a theory which can reconstruct an image almost perfectly with only a few measurements by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we both theoretically and experimentally demonstrate a method that combines the advantages of both adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.
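The sparse reconstruction at the heart of compressed sensing can be sketched with a generic solver; the ISTA iteration below is a standard stand-in (random sensing matrix, synthetic 3-sparse signal), not the authors' adaptive ghost-imaging scheme.

```python
import numpy as np

def ista(A, y, lam=0.05, iterations=300):
    """Iterative shrinkage-thresholding (ISTA): a gradient step on the
    least-squares fit followed by soft-thresholding, which drives most
    coefficients to exactly zero (the sparsity prior)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + A.T @ (y - A @ x) / L                           # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # shrinkage
    return x

rng = np.random.default_rng(0)
n, m = 50, 25                             # 50 unknowns, only 25 measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[4, 17, 31]] = [1.0, -1.5, 2.0]    # 3-sparse signal
y = A @ x_true
x_hat = ista(A, y)
```

Recovering 50 unknowns from 25 measurements is exactly the undersampling that lets ghost-imaging protocols cut acquisition time, provided the scene is sparse in some basis.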
Farrelly, Matthew C; Davis, Kevin C; Nonnemaker, James M; Kamyab, Kian; Jackson, Christine
2011-07-01
To understand the relative effectiveness of television advertisements that differ in their thematic focus and portrayals of negative emotions and/or graphic images in promoting calls to a smokers' quitline. Regression analysis is used to explain variation in quarterly media market-level per smoker calls to the New York State Smokers' Quitline from 2001 to 2009. The primary independent variable is quarterly market-level delivery of television advertisements measured by target audience rating points (TARPs). Advertisements were characterised by their overall objective--promoting cessation, highlighting the dangers of secondhand smoke (SHS) or other--and by their portrayals of strong negative emotions and graphic images. Per smoker call volume is positively correlated with total TARPs (p<0.001), and cessation advertisements are more effective than SHS advertisements in promoting quitline call volume. Advertisements with graphic images only or neither strong negative emotions nor graphic images are associated with higher call volume with similar effect sizes. Call volume was not significantly associated with the number of TARPs for advertisements with strong negative emotions only (p=0.71) or with both graphic images and strong emotions (p=0.09). Exposure to television advertisements is strongly associated with quitline call volume, and both cessation and SHS advertisements can be effective. The use of strong negative emotions in advertisements may be effective in promoting smoking cessation in the population but does not appear to influence quitline call volume. Further research is needed to understand the role of negative emotions in promoting calls to quitlines and cessation more broadly among the majority of smokers who do not call quitlines.
Zhu, Wensheng; Yuan, Ying; Zhang, Jingwen; Zhou, Fan; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-02-01
The aim of this paper is to systematically evaluate a biased sampling issue associated with genome-wide association analysis (GWAS) of imaging phenotypes for most imaging genetic studies, including the Alzheimer's Disease Neuroimaging Initiative (ADNI). Specifically, the original sampling scheme of these imaging genetic studies is primarily the retrospective case-control design, whereas most existing statistical analyses of these studies ignore such sampling scheme by directly correlating imaging phenotypes (called the secondary traits) with genotype. Although it has been well documented in genetic epidemiology that ignoring the case-control sampling scheme can produce highly biased estimates, and subsequently lead to misleading results and suspicious associations, such findings are not well documented in imaging genetics. We use extensive simulations and a large-scale imaging genetic data analysis of the Alzheimer's Disease Neuroimaging Initiative (ADNI) data to evaluate the effects of the case-control sampling scheme on GWAS results based on some standard statistical methods, such as linear regression methods, while comparing it with several advanced statistical methods that appropriately adjust for the case-control sampling scheme. Copyright © 2016 Elsevier Inc. All rights reserved.
Zero echo time MRI-only treatment planning for radiation therapy of brain tumors after resection.
Boydev, C; Demol, B; Pasquier, D; Saint-Jalmes, H; Delpon, G; Reynaert, N
2017-10-01
Using magnetic resonance imaging (MRI) as the sole imaging modality for patient modeling in radiation therapy (RT) is a challenging task due to the need to derive electron density information from MRI and construct a so-called pseudo-computed tomography (pCT) image. We have previously published a new method to derive pCT images from head T1-weighted (T1-w) MR images using a single-atlas propagation scheme followed by a post hoc correction of the mapped CT numbers using local intensity information. The purpose of this study was to investigate the performance of our method with head zero echo time (ZTE) MR images. To evaluate results, the mean absolute error in bins of 20 HU was calculated with respect to the true planning CT scan of the patient. We demonstrated that applying our method using ZTE MR images instead of T1-w improved the correctness of the pCT in case of bone resection surgery prior to RT (that is, an example of large anatomical difference between the atlas and the patient). Copyright © 2017. Published by Elsevier Ltd.
A spot-matching method using cumulative frequency matrix in 2D gel images
Han, Chan-Myeong; Park, Joon-Ho; Chang, Chu-Seok; Ryoo, Myung-Chun
2014-01-01
A new method for spot matching in two-dimensional gel electrophoresis images using a cumulative frequency matrix is proposed. The method improves on the weak points of the previous method, 'spot matching by topological patterns of neighbour spots'. It accumulates the frequencies of neighbour spot pairs produced through the entire matching process and determines spot pairs one by one in order of decreasing frequency. Spot matching by frequencies of neighbour spot pairs shows considerably better performance. Moreover, it gives researchers a hint as to whether the matching results are trustworthy, which can save a great deal of effort in verifying the results. PMID:26019609
NASA Astrophysics Data System (ADS)
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segment lesions with compact shapes on medical images. In this study, we address the segmentation of hepatocellular carcinoma lesions, which are usually of various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, is applied in this method to describe the compactness of shapes. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on computed tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
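The isoperimetric constraint builds on the ratio 4πA/P², which equals 1 for a disc and decreases for elongated or irregular shapes. A small sketch on polygons (synthetic shapes, not CT data) shows why the ratio works as a compactness measure:

```python
import numpy as np

def compactness(vertices):
    """Isoperimetric ratio 4*pi*A / P**2 of a simple polygon: 1 for a
    disc, smaller for elongated shapes, which is why it can serve as a
    compactness prior in segmentation."""
    x, y = vertices[:, 0], vertices[:, 1]
    # Shoelace formula for the area, summed edge lengths for the perimeter.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.linalg.norm(vertices - np.roll(vertices, -1, axis=0), axis=1).sum()
    return 4 * np.pi * area / perim ** 2

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
disc = np.column_stack([np.cos(t), np.sin(t)])       # 200-gon approximating a disc
strip = np.array([[0, 0], [10, 0], [10, 1], [0, 1]], dtype=float)  # 10:1 rectangle
print(compactness(disc), compactness(strip))  # near 1.0 vs. roughly 0.26
```

Penalizing low values of this ratio discourages the leaky, tentacle-like contours that weak lesion boundaries otherwise produce.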
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B
2015-10-06
Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M.; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E.; Allen, Peter J.; Sempere, Lorenzo F.; Haab, Brian B.
2016-01-01
Certain experiments involve the high-throughput quantification of image data, thus requiring algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multi-color, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu’s method for selected images. SFT promises to advance the goal of full automation in image analysis. PMID:26339978
Iris Image Classification Based on Hierarchical Visual Codebook.
Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang
2014-06-01
Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
Sheehy, O; Gendron, M-P; Martin, B; Bérard, A
2012-06-01
IMAGe provides information on risks and benefits of medication use during pregnancy and lactation. The aim of this study was to determine the impact of Health Canada warnings on the number of calls received at IMAGe. We analyzed calls received between January 2003 and March 2008. The impact of the following warnings/withdrawals was studied: paroxetine and risk of cardiac malformations (09/29/2005), selective serotonin reuptake inhibitors (SSRIs) and risk of persistent pulmonary hypertension of the newborn (PPHN) (03/10/2006), and the rofecoxib market withdrawal (09/30/2004). Interrupted auto-regressive integrated moving-average (ARIMA) analyses were used to test the impact of each warning on the number of calls received at IMAGe. 61,505 calls were analyzed. The paroxetine warning had a temporary impact increasing the overall number of calls to IMAGe, and an abrupt permanent effect on the number of calls related to antidepressant exposures. The PPHN warning had no impact, but we observed a significant increase in the number of calls following the rofecoxib market withdrawal. Health Canada needs to consider the increase in the demand for information at IMAGe following warnings on the risk of medication use during pregnancy. © Georg Thieme Verlag KG Stuttgart · New York.
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, a local feature, rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose, and illumination variations. Since a hexagonal image has more inherent merits than a square image for making the recognition process efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed over a constructed dictionary; these sparse vectors are then quantized against the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases, and the effectiveness of hexagonal sampling in the proposed method is verified.
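The quantize-then-histogram step that produces a Bag-of-Words vector can be sketched generically; the toy 2D descriptors and the hypothetical two-word dictionary below stand in for real SIFT descriptors and a learned codebook.

```python
import numpy as np

def bow_histogram(descriptors, dictionary):
    """Quantize each local descriptor to its nearest dictionary word and
    return the normalized word-count histogram used as the image vector."""
    # Squared Euclidean distance from every descriptor to every word.
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what makes variable numbers of keypoints per face comparable, and it is this vector that an SVM would classify.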
Imaging isodensity contours of molecular states with STM
NASA Astrophysics Data System (ADS)
Reecht, Gaël; Heinrich, Benjamin W.; Bulou, Hervé; Scheurer, Fabrice; Limot, Laurent; Schull, Guillaume
2017-11-01
We present an improved way for imaging the density of states of a sample with a scanning tunneling microscope, which consists in mapping the surface topography while keeping the differential conductance (dI/dV) constant. When archetypical C60 molecules on Cu(111) are imaged with this method, these so-called iso-dI/dV maps are in excellent agreement with theoretical simulations of the isodensity contours of the molecular orbitals. A direct visualization and unambiguous identification of superatomic C60 orbitals and their hybridization is then possible.
Towards local estimation of emphysema progression using image registration
NASA Astrophysics Data System (ADS)
Staring, M.; Bakker, M. E.; Shamonin, D. P.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.
2009-02-01
Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of three patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
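The volume correction mentioned above uses the determinant of the Jacobian of the registration transformation, which measures local expansion (>1) or compression (<1). A small finite-difference sketch on a synthetic uniform deformation (hypothetical 10% expansion in x, 5% shrinkage in y) illustrates the quantity:

```python
import numpy as np

# Synthetic 2D deformation: uniform 10% expansion in x, 5% shrinkage in y.
# The Jacobian determinant should then be 1.10 * 0.95 = 1.045 everywhere.
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
tx, ty = 1.10 * x, 0.95 * y            # transformed coordinates

# Finite-difference Jacobian of the mapping (x, y) -> (tx, ty).
dtx_dx = np.gradient(tx, axis=1)
dtx_dy = np.gradient(tx, axis=0)
dty_dx = np.gradient(ty, axis=1)
dty_dy = np.gradient(ty, axis=0)
det_j = dtx_dx * dty_dy - dtx_dy * dty_dx
print(det_j.mean())   # ~1.045: local volume change factor at each voxel
```

In the emphysema setting, dividing local density changes by this factor separates genuine tissue change from apparent change caused purely by differences in inspiration level.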
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compounding technology is often used in film production. Conventionally, image compounding relies on image-processing algorithms: useful objects, details, backgrounds, or other elements are first extracted from the source images, and all of this information is then compounded into one image. With this approach the film system needs a powerful processor, since the processing is very complex, and the compounded image is obtained only after some delay. In this paper, we introduce a new method for real-time image compounding, with which images can be compounded at the same time as the movie is shot. The system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), a thin-film-transistor liquid crystal display (TFT-LCD), a deformable micro-mirror device (DMD), and so on. First, one camera lens, which we call the first imaging lens, images the object onto the spatial light modulator's panel. Second, we output an image to the panel of the spatial light modulator, so that the image of the object and the image output by the modulator are spatially compounded on the panel. Third, the other camera lens, which we call the second imaging lens, images the compounded image onto the image sensor. After these three steps, the compounded image is captured by the image sensor. Because the spatial light modulator can output images continuously, the compounding is continuous as well, and the procedure is completed in real time. To place a real object into a virtual background, we output the virtual background scene on the spatial light modulator while the real object is imaged by the first imaging lens; the compounded images are then captured by the image sensor in real time. In the same way, to place a virtual object into a real background, we output the virtual object on the spatial light modulator while the real background is imaged by the first imaging lens. Since most spatial light modulators modulate only light intensity, a single panel without a color filter can compound only black-and-white images; to obtain color compounded images, a system like a three-panel spatial-light-modulator projector is needed. The paper gives the framework of the system's optical design. In all experiments, the spatial light modulator used was liquid crystal on silicon (LCoS). At the end of the paper, some original pictures and compounded pictures are given. Although the system has a few shortcomings, we can conclude that compounding images with this system incurs no delay from mathematical compounding processing; it is a truly real-time image compounding system.
Detecting the Edge of the Tongue: A Tutorial
ERIC Educational Resources Information Center
Iskarous, Khalil
2005-01-01
The goal of this paper is to provide a tutorial introduction to the topic of edge detection of the tongue from ultrasound scans for researchers in speech science and phonetics. The method introduced here is Active Contours (also called snakes), a method for searching for an edge, assuming that it is a smooth curve in the image data. The advantage…
Khang, Hyun Soo; Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Yoon, Jeong Rock; Seo, Jin Keun
2002-06-01
Recently, a new static resistivity image reconstruction algorithm was proposed that utilizes internal current density data obtained by the magnetic resonance current density imaging technique. This new imaging method is called magnetic resonance electrical impedance tomography (MREIT). The derivation and performance of the J-substitution algorithm in MREIT have been reported, via computer simulations, as a new accurate and high-resolution static impedance imaging technique. In this paper, we present experimental procedures, denoising techniques, and image reconstructions using a 0.3-tesla (T) experimental MREIT system and saline phantoms. MREIT using the J-substitution algorithm effectively utilizes the internal current density information, resolving the problem inherent in conventional EIT, namely the low sensitivity of boundary measurements to changes in internal tissue resistivity values. Resistivity images of saline phantoms show an accuracy of 6.8%-47.2% and a spatial resolution of 64 x 64. Both can be significantly improved by using an MRI system with a better signal-to-noise ratio.
On the importance of mathematical methods for analysis of MALDI-imaging mass spectrometry data.
Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore
2012-03-21
In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we give an overview of computational methods for the analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
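The preprocessing chain the abstract lists (normalization, baseline removal, peak picking) can be sketched on a synthetic 1-D spectrum. This is a generic illustration, not the authors' pipeline; the moving-minimum baseline, the window size, and the 3-sigma peak threshold are arbitrary choices for the sketch.

```python
import numpy as np

def preprocess_spectrum(intensity, window=25):
    """Toy preprocessing chain: TIC normalization, baseline
    subtraction via a moving minimum, and local-maximum peak picking."""
    norm = intensity / intensity.sum()            # total-ion-count normalization
    half = window // 2
    padded = np.pad(norm, half, mode='edge')
    baseline = np.array([padded[i:i + window].min()
                         for i in range(len(norm))])
    corrected = norm - baseline
    thr = corrected.mean() + 3 * corrected.std()  # crude noise threshold
    peaks = [i for i in range(1, len(corrected) - 1)
             if corrected[i] > corrected[i - 1]
             and corrected[i] > corrected[i + 1]
             and corrected[i] > thr]
    return corrected, peaks

# Synthetic spectrum: two Gaussian peaks on a sloping baseline
mz = np.linspace(100, 1000, 900)
spectrum = (np.exp(-(mz - 300) ** 2 / 20)
            + 0.5 * np.exp(-(mz - 700) ** 2 / 20)
            + 0.001 * mz)                         # baseline drift
corrected, peaks = preprocess_spectrum(spectrum)
print([int(round(float(mz[i]))) for i in peaks])  # [300, 700]
```

The moving minimum removes the slow drift while leaving the narrow peaks intact, so the two injected peaks are the only samples exceeding the threshold.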
On the Importance of Mathematical Methods for Analysis of MALDI-Imaging Mass Spectrometry Data.
Trede, Dennis; Kobarg, Jan Hendrik; Oetjen, Janina; Thiele, Herbert; Maass, Peter; Alexandrov, Theodore
2012-03-01
In the last decade, matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS), also called MALDI-imaging, has proven its potential in proteomics and has been successfully applied to various types of biomedical problems, in particular to histopathological label-free analysis of tissue sections. In histopathology, MALDI-imaging is used as a general analytic tool revealing the functional proteomic structure of tissue sections, and as a discovery tool for detecting new biomarkers discriminating a region annotated by an experienced histologist, in particular for cancer studies. A typical MALDI-imaging data set contains 10⁸ to 10⁹ intensity values occupying more than 1 GB. Analysis and interpretation of such a huge amount of data is a mathematically, statistically and computationally challenging problem. In this paper we give an overview of computational methods for the analysis of MALDI-imaging data sets. We discuss the importance of data preprocessing, which typically includes normalization, baseline removal and peak picking, and highlight the importance of image denoising when visualizing IMS data.
Fourier-based automatic alignment for improved Visual Cryptography schemes.
Machizaud, Jacques; Chavel, Pierre; Fournel, Thierry
2011-11-07
In Visual Cryptography, several images, called "shadow images", that separately contain no information, are overlapped to reveal a shared secret message. We develop a method to digitally register one printed shadow image acquired by a camera with a purely digital shadow image, stored in memory. Using Fourier techniques derived from Fourier Optics concepts, the idea is to enhance and exploit the quasi periodicity of the shadow images, composed by a random distribution of black and white patterns on a periodic sampling grid. The advantage is to speed up the security control or the access time to the message, in particular in the cases of a small pixel size or of large numbers of pixels. Furthermore, the interest of visual cryptography can be increased by embedding the initial message in two shadow images that do not have identical mathematical supports, making manual registration impractical. Experimental results demonstrate the successful operation of the method, including the possibility to directly project the result onto the printed shadow image.
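The shadow-image principle described above can be illustrated with a minimal 2-out-of-2 visual cryptography scheme. This toy sketch (binary secret, 1x2 subpixel blocks) only shows why stacking reveals the secret; it is not the paper's Fourier-based registration method.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret):
    """2-out-of-2 visual cryptography: each secret pixel expands to a
    1x2 pair of subpixels in each share (1 = black).  A white pixel
    gets identical pairs, so the overlay keeps one black subpixel; a
    black pixel gets complementary pairs, so the overlay is fully
    black.  Each share alone is a uniform random pattern."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation(np.array([0, 1]))
            s1[i, 2 * j:2 * j + 2] = pattern
            s2[i, 2 * j:2 * j + 2] = 1 - pattern if secret[i, j] else pattern
    return s1, s2

secret = np.array([[1, 0], [0, 1]])   # 1 = black message pixel
s1, s2 = make_shares(secret)
stacked = s1 | s2                     # physical overlay = pixelwise OR
block_sums = stacked.reshape(2, 2, 2).sum(axis=2)
print(block_sums)                     # 2 where secret is black, 1 elsewhere
```

The registration problem the paper addresses arises because this overlay must be pixel-accurate: a misaligned share destroys the contrast between the "1" and "2" blocks.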
Review of Image Quality Measures for Solar Imaging
NASA Astrophysics Data System (ADS)
Popowicz, Adam; Radlak, Krystian; Bernacki, Krzysztof; Orlov, Valeri
2017-12-01
Observations of the solar photosphere from the ground encounter significant problems caused by Earth's turbulent atmosphere. Before image reconstruction techniques can be applied, the frames obtained in the most favorable atmospheric conditions (the so-called lucky frames) have to be carefully selected. However, estimating the quality of images containing complex photospheric structures is not a trivial task, and the standard routines applied in nighttime lucky imaging observations are not applicable. In this paper we evaluate 36 methods dedicated to the assessment of image quality, which were presented in the literature over the past 40 years. We compare their effectiveness on simulated solar observations of both active regions and granulation patches, using reference data obtained by the Solar Optical Telescope on the Hinode satellite. To create images that are affected by a known degree of atmospheric degradation, we employed the random wave vector method, which faithfully models all the seeing characteristics. The results provide useful information about the method performances, depending on the average seeing conditions expressed by the ratio of the telescope's aperture to the Fried parameter, D/r0. The comparison identifies three methods for consideration by observers: Helmli and Scherer's mean, the median filter gradient similarity, and the discrete cosine transform energy ratio. While the first method requires less computational effort and can be used effectively in virtually any atmospheric conditions, the second method shows its superiority at good seeing (D/r0<4). The third method should mainly be considered for the post-processing of strongly blurred images.
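One of the recommended measures, the discrete cosine transform energy ratio, can be sketched as follows. The naive DCT, the 16x16 random test image, the 4-neighbour blur, and the cutoff value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dct2(block):
    """Naive orthonormal 2-D DCT-II via an explicit basis matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def dct_energy_ratio(img, cutoff=4):
    """Fraction of AC spectral energy above a low-frequency cutoff:
    atmospheric blur suppresses high frequencies, so sharper frames
    should score higher."""
    E = dct2(img.astype(float)) ** 2
    E[0, 0] = 0.0                          # ignore the DC term
    k, l = np.indices(E.shape)
    return E[k + l >= cutoff].sum() / E.sum()

rng = np.random.default_rng(1)
sharp = rng.random((16, 16))
# Crude low-pass "seeing": average each pixel with its 4 neighbours
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5
print(dct_energy_ratio(sharp) > dct_energy_ratio(blurred))  # True
```

Ranking frames by this ratio and keeping the top few percent is the lucky-frame selection step the abstract refers to.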
Autocorrelation techniques for soft photogrammetry
NASA Astrophysics Data System (ADS)
Yao, Wu
In this thesis, research is carried out on image processing, image matching search strategies, feature types and image matching, and the optimal window size in image matching. For comparison, the soft photogrammetry package SoftPlotter is used. Two aerial photographs from the Iowa State University campus high flight 94 are scanned into digital format. In order to create a stereo model from them, interior orientation, single photograph rectification and stereo rectification are performed. Two new image matching methods, multi-method image matching (MMIM) and unsquare window image matching, are developed and compared. MMIM is used to determine the optimal window size in image matching. Twenty-four check points from four different types of ground features are used for checking the results from image matching. Comparison between these four types of ground feature shows that the methods developed here improve the speed and the precision of image matching. A process called direct transformation is described and compared with the multiple steps in image processing. The results from image processing are consistent with those from SoftPlotter. A modified LAN image header is developed and used to store the information about the stereo model and image matching. A comparison is also made between cross correlation image matching (CCIM), least difference image matching (LDIM) and least squares image matching (LSIM). The quality of image matching in relation to ground features is compared using two methods developed in this study, the coefficient surface for CCIM and the difference surface for LDIM. To reduce the amount of computation in image matching, the best-track searching algorithm, developed in this research, is used instead of the whole-range searching algorithm.
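Cross-correlation image matching with a whole-range search, one of the methods compared in the thesis, can be sketched as follows. The toy image sizes and the self-extracted template are illustrative assumptions, not the thesis or SoftPlotter code.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal windows."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def match_window(image, template):
    """Whole-range search: slide the template over the image and
    return the offset with the highest correlation coefficient."""
    th, tw = template.shape
    best, best_pos = -2.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            score = ncc(image[i:i + th, j:j + tw], template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(2)
img = rng.random((20, 20))
template = img[5:12, 8:15]        # window cut from the image itself
pos, score = match_window(img, template)
print(pos)                        # (5, 8), the template's true offset
```

The "best-track" idea mentioned in the abstract replaces these exhaustive nested loops with a guided search along promising offsets, which is where the computational saving comes from.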
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
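The Tikhonov trade-off that the L-curve method exploits can be sketched on a toy 1-D deblurring problem. The Gaussian smoothing operator and noise level are illustrative assumptions, and the sketch omits the generalized SVD machinery used in the paper.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution minimizing ||Ax-b||^2 + lam^2 ||x||^2."""
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(3)
n = 30
x_true = np.sin(np.linspace(0, np.pi, n))
i, j = np.indices((n, n))
A = np.exp(-0.1 * (i - j) ** 2)          # smoothing (ill-conditioned) operator
b = A @ x_true + 0.01 * rng.standard_normal(n)

# L-curve data: residual norm vs. solution norm as lambda varies
lams = [10.0 ** k for k in range(-6, 1)]
residuals = [np.linalg.norm(A @ tikhonov(A, b, l) - b) for l in lams]
norms = [np.linalg.norm(tikhonov(A, b, l)) for l in lams]
# Stronger regularization: larger residual, smaller (smoother) solution.
# The L-curve corner is the lambda balancing the two.
print(residuals[0] < residuals[-1], norms[0] > norms[-1])  # True True
```

Plotting log(residual) against log(solution norm) over `lams` produces the characteristic L shape; the corner marks the parameter the paper's method selects automatically.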
2011-01-01
Background Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss/. PMID:21668958
NASA Astrophysics Data System (ADS)
Mrozek, T.; Perlicki, K.; Tajmajer, T.; Wasilewski, P.
2017-08-01
The article presents an image analysis method, based on the asynchronous delay tap sampling (ADTS) technique, which is used for simultaneous monitoring of various impairments occurring in the physical layer of the optical network. The ADTS method enables the visualization of the optical signal in the form of characteristics (so-called phase portraits) that change their shape under the influence of impairments such as chromatic dispersion, polarization mode dispersion and ASE noise. Using this method, a simulation model was built with OptSim 4.0. After the simulation study, data were obtained in the form of images that were further analyzed using a convolutional neural network algorithm. The main goal of the study was to train a convolutional neural network to recognize the selected impairment (distortion), then to test its accuracy and estimate the impairment for the selected set of test images. The input data consisted of processed binary images in the form of two-dimensional matrices indexed by pixel position. This article focuses only on the analysis of images containing chromatic dispersion.
Live CLEM imaging to analyze nuclear structures at high resolution.
Haraguchi, Tokuko; Osakada, Hiroko; Koujin, Takako
2015-01-01
Fluorescence microscopy (FM) and electron microscopy (EM) are powerful tools for observing molecular components in cells. FM can provide temporal information about cellular proteins and structures in living cells. EM provides nanometer resolution images of cellular structures in fixed cells. We have combined FM and EM to develop a new method of correlative light and electron microscopy (CLEM), called "Live CLEM." In this method, the dynamic behavior of specific molecules of interest is first observed in living cells using fluorescence microscopy (FM) and then cellular structures in the same cell are observed using electron microscopy (EM). Following image acquisition, FM and EM images are compared to enable the fluorescent images to be correlated with the high-resolution images of cellular structures obtained using EM. As this method enables analysis of dynamic events involving specific molecules of interest in the context of specific cellular structures at high resolution, it is useful for the study of nuclear structures including nuclear bodies. Here we describe Live CLEM that can be applied to the study of nuclear structures in mammalian cells.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed, as well as two new algorithms for their solution. The first one, called simple low-rank tensor completion via TT (SiLRTC-TT), is intimately related to minimizing a nuclear norm based on the TT rank. The second one is derived from a multilinear matrix factorization model that approximates the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
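As a rough illustration of the completion idea, in the simpler matrix case rather than the paper's tensor-train setting, one can alternate a fixed-rank SVD projection with re-imposing the observed entries. This is a generic "hard-impute" low-rank completion sketch, not SiLRTC-TT or TMac-TT.

```python
import numpy as np

def complete_lowrank(M, mask, rank, iters=200):
    """Fill missing entries by alternating a best rank-r SVD
    approximation with restoration of the observed entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
        X[mask] = M[mask]                          # keep known data
    return X

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))  # rank 2
mask = rng.random(A.shape) > 0.4        # ~60% of entries observed
X = complete_lowrank(A, mask, rank=2)
err = np.linalg.norm(X - A) / np.linalg.norm(A)
print(err < 0.1)
```

The TT-based methods in the paper generalize this principle by controlling the ranks of a sequence of balanced matricizations of the tensor rather than a single matrix rank.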
Nguyen, Dat Tien; Park, Kang Ryoung
2016-07-21
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
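The idea of weighting HOG features by region quality can be sketched as follows. The crude per-cell gradient histogram, the binary quality map, and the multiplicative weighting are illustrative assumptions, not the authors' wHOG implementation.

```python
import numpy as np

def hog_cell(cell, nbins=8):
    """Gradient-orientation histogram of one cell (a crude HOG cell)."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)       # unsigned orientation
    bins = np.minimum((ang / np.pi * nbins).astype(int), nbins - 1)
    hist = np.zeros(nbins)
    np.add.at(hist, bins.ravel(), mag.ravel())    # magnitude-weighted votes
    return hist

def weighted_hog(image, quality, cell=8):
    """Concatenate per-cell histograms scaled by a per-cell quality
    weight, so low-quality (background) cells contribute little."""
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            q = quality[i:i + cell, j:j + cell].mean()
            feats.append(q * hog_cell(image[i:i + cell, j:j + cell]))
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)

rng = np.random.default_rng(5)
img = rng.random((16, 16))
q_good = np.ones((16, 16))
q_half = q_good.copy()
q_half[:, 8:] = 0.0                # right-hand cells judged background
f1 = weighted_hog(img, q_good)
f2 = weighted_hog(img, q_half)
print(f1.shape, np.allclose(f2[8:16], 0), np.allclose(f2[24:32], 0))
```

With a 16x16 image and 8-pixel cells there are four cells, so the feature has 32 entries; zeroing the quality of the right-hand cells silences exactly their histogram slices.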
Nguyen, Dat Tien; Park, Kang Ryoung
2016-01-01
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.
Glover, Jack L; Hudson, Lawrence T
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so-called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.
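The use of a Radon-style transform to reduce wire detection to one-dimensional peak finding can be sketched with a Hough-type voting accumulator. The synthetic 50x50 "wire" image and the parameter grids are toy assumptions, not the standard's test object.

```python
import numpy as np

def hough_lines(img, n_theta=180):
    """Hough/Radon-style voting: every bright pixel votes for all
    (theta, rho) lines through it; accumulator peaks mark straight
    wires, turning 2-D detection into 1-D peak finding."""
    h, w = img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((n_theta, 2 * diag))
    ys, xs = np.nonzero(img)
    for t, theta in enumerate(thetas):
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int)
        np.add.at(acc, (t, rhos + diag), 1)
    return acc, thetas, diag

# Synthetic "wire": a horizontal bright line on a dark background
img = np.zeros((50, 50), dtype=int)
img[20, :] = 1
acc, thetas, diag = hough_lines(img)
t, r = np.unravel_index(acc.argmax(), acc.shape)
print(int(round(float(np.rad2deg(thetas[t])))), int(r - diag))  # 90 20
```

The height of the accumulator peak relative to the background votes gives a natural, adjustable detection threshold, which is the role the abstract's threshold parameter plays.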
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
Glover, Jack L.; Hudson, Lawrence T.
2016-01-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so-called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586
An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems
NASA Astrophysics Data System (ADS)
Glover, Jack L.; Hudson, Lawrence T.
2016-06-01
The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so-called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
Multiscale hidden Markov models for photon-limited imaging
NASA Astrophysics Data System (ADS)
Nowak, Robert D.
1999-06-01
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
Novel algorithm by low complexity filter on retinal vessel segmentation
NASA Astrophysics Data System (ADS)
Rostampour, Samad
2011-10-01
This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetic disease, because diabetes can form new capillaries, which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background and makes a good contrast between vessels and background. Its complexity is very low, and extraneous image content is eliminated. The second phase is processing, and the method used is called Bayesian, a supervised classification method. This method uses the mean and variance of pixel intensities to calculate probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to a sample with retinopathy from outside the DRIVE database, and a good result was obtained.
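The Bayesian classification step described above (per-class mean and variance of pixel intensity, two classes) can be sketched as follows. The Gaussian class model and the synthetic intensities are illustrative assumptions, not the DRIVE data or the article's filter.

```python
import numpy as np

def gaussian_pixel_classifier(train_vessel, train_bg):
    """Fit one Gaussian per class from training intensities, then
    label each pixel by the larger class log-likelihood."""
    stats = [(t.mean(), t.var()) for t in (train_vessel, train_bg)]

    def log_lik(x, mu, var):
        return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

    def classify(img):
        lv = log_lik(img, *stats[0])
        lb = log_lik(img, *stats[1])
        return (lv > lb).astype(int)   # 1 = vessel, 0 = background
    return classify

rng = np.random.default_rng(6)
# Dark vessels (mean 0.2) on a bright background (mean 0.8)
vessel_px = rng.normal(0.2, 0.05, 500)
bg_px = rng.normal(0.8, 0.05, 500)
classify = gaussian_pixel_classifier(vessel_px, bg_px)
test = np.array([[0.18, 0.79], [0.25, 0.85]])
print(classify(test))   # dark pixels labeled 1, bright pixels 0
```

With equal class variances the decision rule reduces to an intensity threshold midway between the class means, which is why the preprocessing filter's contrast enhancement matters so much.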
High sensitivity contrast enhanced optical coherence tomography for functional in vivo imaging
NASA Astrophysics Data System (ADS)
Liba, Orly; SoRelle, Elliott D.; Sen, Debasish; de la Zerda, Adam
2017-02-01
In this study, we developed and applied highly-scattering large gold nanorods (LGNRs) and custom spectral detection algorithms for high sensitivity contrast-enhanced optical coherence tomography (OCT). We were able to detect LGNRs at a concentration as low as 50 pM in blood. We used this approach for noninvasive 3D imaging of blood vessels deep in solid tumors in living mice. Additionally, we demonstrated multiplexed imaging of spectrally-distinct LGNRs that enabled observations of functional drainage in lymphatic networks. This method, which we call MOZART, provides a platform for molecular imaging and characterization of tissue noninvasively at cellular resolution.
Xiao, X; Bai, B; Xu, N; Wu, K
2015-04-01
Oversegmentation is a major drawback of the morphological watershed algorithm. Here, we study and reveal that oversegmentation arises not only from the irregular shapes of particle images, which people are familiar with, but also from particles, such as ellipses, that have more than one centre. A new parameter, the striping level, is introduced, and a criterion for the striping parameter is built to help find the right markers prior to segmentation. An adaptive striping watershed algorithm is established by applying a procedure, called the marker searching algorithm, to find the markers, which can effectively suppress the oversegmentation. The effectiveness of the proposed method is validated by analysing some typical particle images, including images of gold nanorod ensembles. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
Entropy reduction via simplified image contourization
NASA Technical Reports Server (NTRS)
Turner, Martin J.
1993-01-01
The process of contourization is presented which converts a raster image into a set of plateaux or contours. These contours can be grouped into a hierarchical structure, defining total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes thus lowering the contour tree's entropy. This can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimizes noticeable artifacts in the simplified image.
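Contourization into plateaux can be sketched as quantization followed by connected-component labelling. This toy sketch merely counts the resulting plateau regions (the contour-tree nodes) on a tiny image; it is not the paper's contour coder or tree-merging scheme.

```python
def plateaus(img, levels):
    """Contourize: quantize intensities into plateaux, then flood-fill
    each plateau into 4-connected regions and count them."""
    h, w = len(img), len(img[0])
    q = [[min(int(img[i][j] * levels), levels - 1) for j in range(w)]
         for i in range(h)]
    label = [[-1] * w for _ in range(h)]
    regions = 0
    for i in range(h):
        for j in range(w):
            if label[i][j] == -1:
                stack, lv = [(i, j)], q[i][j]
                label[i][j] = regions
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and label[ny][nx] == -1 and q[ny][nx] == lv):
                            label[ny][nx] = regions
                            stack.append((ny, nx))
                regions += 1
    return regions, label

# A bright plateau nested inside a dark background: two contours,
# with the inner one spatially included in the outer (a 2-node tree)
img = [[0.1] * 5 for _ in range(5)]
for i in (1, 2, 3):
    for j in (1, 2, 3):
        img[i][j] = 0.9
n, _ = plateaus(img, levels=4)
print(n)  # 2
```

The nesting relation between such regions is what the contour tree records; merging adjacent nodes of similar level is the entropy-reduction step the abstract describes.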
Three Dimensional Visualization of GOES Cloud Data Using Octrees
1993-06-01
…structure for CAD of integrated circuits that can subdivide the cubes into more complex polyhedrons. Medical imaging is also taking advantage of the…
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
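The quantization-with-dithering idea can be sketched as follows. This is a generic subtractive-dither sketch, not the exact fpack algorithm, and the constant test image is chosen to expose the rounding bias that dithering removes.

```python
import numpy as np

def quantize_dither(img, q, seed=0):
    """Quantize to scaled integers with subtractive dithering: uniform
    noise is added before truncation and subtracted again on restore,
    which keeps the quantization unbiased and signal-independent."""
    r = np.random.default_rng(seed).random(img.shape)
    ints = np.floor(img / q + r).astype(np.int64)   # what gets compressed
    restored = (ints - r + 0.5) * q                 # unquantize, same dither
    return ints, restored

# A constant image whose value falls between quantization levels
img = np.full(10000, 100.3)
q = 1.0                                 # quantization step
_, restored = quantize_dither(img, q)
naive = np.round(img / q) * q           # plain rounding, no dither
# Dithering preserves the mean intensity; plain rounding biases it
print(round(float(restored.mean()), 1), float(naive.mean()))  # 100.3 100.0
```

Averaged over many pixels, the dithered reconstruction recovers the sub-step mean, which is why photometry on dithered quantized images stays accurate even at coarse steps.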
Instrument performance enhancement and modification through an extended instrument paradigm
NASA Astrophysics Data System (ADS)
Mahan, Stephen Lee
An extended instrument paradigm is proposed, developed and shown in various applications. The CBM (Chin, Blass, Mahan) method is an extension to the linear systems model of observing systems. In the most obvious and practical application of image enhancement of an instrument characterized by a time-invariant instrumental response function, CBM can be used to enhance images or spectra through a simple convolution application of the CBM filter for a resolution improvement of as much as a factor of two. The CBM method can be used in many applications. We discuss several within this work including imaging through turbulent atmospheres, or what we've called Adaptive Imaging. Adaptive Imaging provides an alternative approach for the investigator desiring results similar to those obtainable with adaptive optics, however on a minimal budget. The CBM method is also used in a backprojected filtered image reconstruction method for Positron Emission Tomography. In addition, we can use information theoretic methods to aid in the determination of model instrumental response function parameters for images having an unknown origin. Another application presented herein involves the use of the CBM method for the determination of the continuum level of a Fourier transform spectrometer observation of ethylene, which provides a means for obtaining reliable intensity measurements in an automated manner. We also present the application of CBM to hyperspectral image data of the comet Shoemaker-Levy 9 impact with Jupiter taken with an acousto-optical tunable filter equipped CCD camera to an adaptive optics telescope.
Fast and accurate denoising method applied to very high resolution optical remote sensing images
NASA Astrophysics Data System (ADS)
Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon
2017-10-01
Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, with conservation of relevant information and saliency, as well as rapidity, due to the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe algorithm operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
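The non-local principle quoted in the abstract ("similar structures, same noise estimation") can be illustrated with a much-simplified 1-D non-local averaging sketch. The patch size and filtering strength `h` are arbitrary choices, and the Bayesian second step of NLB is omitted entirely.

```python
import numpy as np

def nl_means_1d(signal, patch=5, h=0.5):
    """Simplified non-local denoising: replace each sample by a
    weighted average of samples whose surrounding patches look
    similar, so averaging happens across repeated structures
    rather than across edges."""
    half = patch // 2
    padded = np.pad(signal, half, mode='reflect')
    patches = np.array([padded[i:i + patch] for i in range(len(signal))])
    out = np.empty_like(signal)
    for i in range(len(signal)):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                         # similarity weights
        out[i] = (w * signal).sum() / w.sum()
    return out

rng = np.random.default_rng(8)
clean = np.sign(np.sin(np.linspace(0, 4 * np.pi, 200)))  # step signal
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = nl_means_1d(noisy)
print(np.abs(noisy - clean).mean() > np.abs(denoised - clean).mean())  # True
```

Because patches on opposite sides of a step are very dissimilar, their weights are essentially zero, so the edges survive while the flat regions are averaged clean; NLB adds a per-group Gaussian model on top of this grouping.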
Automation of disbond detection in aircraft fuselage through thermal image processing
NASA Technical Reports Server (NTRS)
Prabhu, D. R.; Winfree, W. P.
1992-01-01
A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and produces an output image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained, and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image, which was easily filtered out. The thermal detection technique, coupled with an automated image interpretation capability, will be a very fast and effective method for inspecting bonded joints in an aircraft structure.
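Measuring the size of the 'black clusters' in a binarized disbond image amounts to connected-component labeling, where each component's pixel count is a proxy for disbond area. A minimal sketch (a generic flood-fill helper of our own devising, not the authors' code):

```python
import numpy as np
from collections import deque

def cluster_sizes(binary):
    """Return the pixel counts of all 4-connected components in a
    binary image, largest first; each count approximates one
    disbond's area."""
    seen = np.zeros_like(binary, dtype=bool)
    sizes = []
    H, W = binary.shape
    for i in range(H):
        for j in range(W):
            if binary[i, j] and not seen[i, j]:
                # breadth-first flood fill of one cluster
                q = deque([(i, j)])
                seen[i, j] = True
                n = 0
                while q:
                    a, b = q.popleft()
                    n += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if 0 <= x < H and 0 <= y < W and binary[x, y] and not seen[x, y]:
                            seen[x, y] = True
                            q.append((x, y))
                sizes.append(n)
    return sorted(sizes, reverse=True)
```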
Robust w-Estimators for Cryo-EM Class Means
Huang, Chenxi; Tagare, Hemant D.
2016-01-01
A critical step in cryogenic electron microscopy (cryo-EM) image analysis is to calculate the average of all images aligned to a projection direction. This average, called the “class mean”, improves the signal-to-noise ratio in single particle reconstruction (SPR). The averaging step is often compromised because of outlier images of ice, contaminants, and particle fragments. Outlier detection and rejection in the majority of current cryo-EM methods is done using cross-correlation with a manually determined threshold. Empirical assessment shows that the performance of these methods is very sensitive to the threshold. This paper proposes an alternative: a “w-estimator” of the average image, which is robust to outliers and which does not use a threshold. Various properties of the estimator, such as consistency and influence function are investigated. An extension of the estimator to images with different contrast transfer functions (CTFs) is also provided. Experiments with simulated and real cryo-EM images show that the proposed estimator performs quite well in the presence of outliers. PMID:26841397
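The threshold-free robust-averaging idea can be illustrated with a generic iteratively reweighted mean using Huber-type weights, where outlier images are smoothly down-weighted rather than rejected by a hard cutoff (an illustration of the general principle, not the paper's specific w-estimator; the function name and constants are illustrative):

```python
import numpy as np

def robust_mean(images, c=1.5, iters=20):
    """Toy w-estimator-style class mean: whole images are iteratively
    reweighted by their distance to the current estimate, so outliers
    contribute little without any manual threshold."""
    imgs = np.asarray(images, dtype=float)
    mean = imgs.mean(axis=0)
    for _ in range(iters):
        # per-image RMS distance to the current average
        d = np.sqrt(((imgs - mean) ** 2).mean(axis=(1, 2)))
        s = np.median(d) + 1e-12          # robust scale estimate
        r = d / s
        w = np.where(r <= c, 1.0, c / r)  # Huber-type weights
        mean = np.tensordot(w, imgs, axes=1) / w.sum()
    return mean
```

With one grossly contaminated image among many consistent ones, the reweighted average stays close to the consistent images while the plain mean is pulled toward the outlier.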
X-ray imaging: Achieving the third dimension using coherence
Robinson, Ian; Huang, Xiaojing
2017-01-25
X-ray imaging is extensively used in medicine and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime; 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography [1]. The approach combines the 'side view' capability of Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography. Thus, it yields a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e. other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for the embedding. Experimental results demonstrate that the quality of the cover image has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
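The three-step scheme, scoring blocks, ranking them, and writing payload bits into a higher bit-plane, can be sketched as follows (a toy of our own devising: the block score here uses only the high nibble so that embedding never disturbs the ranking, which stands in for, but is not, the paper's entropy/average criterion):

```python
import numpy as np

PLANE = 2  # embed above the LSB so simple filtering attacks do less damage

def block_score(b):
    # suitability from the high nibble only (brightness + activity proxy),
    # so writing into bit-plane PLANE never changes the block ranking
    hb = b >> 4
    return float(hb.mean() + hb.std())

def rank_blocks(img, bs=8):
    H, W = img.shape
    blocks = [(block_score(img[i:i + bs, j:j + bs]), i, j)
              for i in range(0, H, bs) for j in range(0, W, bs)]
    blocks.sort(reverse=True)  # most suitable blocks first
    return blocks

def embed(cover, bits, bs=8):
    """Write one payload bit into bit-plane PLANE of each top-ranked block."""
    stego = cover.copy()
    mask = np.uint8(1 << PLANE)
    for bit, (_, i, j) in zip(bits, rank_blocks(cover, bs)):
        blk = stego[i:i + bs, j:j + bs]
        stego[i:i + bs, j:j + bs] = (blk & ~mask) | np.uint8(bit << PLANE)
    return stego

def extract(stego, nbits, bs=8):
    """Re-rank blocks the same way and read the embedded bit-plane back."""
    return [int((stego[i, j] >> PLANE) & 1)
            for _, i, j in rank_blocks(stego, bs)[:nbits]]
```

Because the score ignores the low bit-planes, the receiver recovers the same block ranking from the stego image and can locate the payload without side information.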
Computer-aided interpretation approach for optical tomographic images
NASA Astrophysics Data System (ADS)
Klose, Christian D.; Klose, Alexander D.; Netz, Uwe J.; Scheel, Alexander K.; Beuthan, Jürgen; Hielscher, Andreas H.
2010-11-01
A computer-aided interpretation approach is proposed to detect rheumatic arthritis (RA) in human finger joints using optical tomographic images. The image interpretation method employs a classification algorithm that makes use of a so-called self-organizing mapping scheme to classify fingers as either affected or unaffected by RA. Unlike in previous studies, this allows for combining multiple image features, such as minimum and maximum values of the absorption coefficient, for identifying affected and unaffected joints. Classification performances obtained by the proposed method were evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and inspection of optical tomographic images) were used to produce ground truth benchmarks to determine the performance of image interpretations. Using data from 100 finger joints, findings suggest that some parameter combinations lead to higher sensitivities, while others to higher specificities, when compared to single-parameter classifications employed in previous studies. Maximum performances are reached when combining the minimum/maximum ratio of the absorption coefficient and image variance. In this case, sensitivities and specificities over 0.9 can be achieved. These values are much higher than those obtained when only single-parameter classifications were used, where sensitivities and specificities remained well below 0.8.
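The self-organizing mapping scheme at the core of such a classifier can be sketched with a minimal 1-D SOM (a generic textbook version, not the authors' scheme; the function name and learning parameters are illustrative):

```python
import numpy as np

def train_som(data, n_units=4, iters=500, lr0=0.5, sigma0=1.0, seed=0):
    """Minimal 1-D self-organizing map: units compete for each sample,
    and the winner plus its neighbours move toward it; learning rate
    and neighbourhood width decay over time."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_units, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit = nearest prototype
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        d = np.arange(n_units) - bmu
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))  # neighbourhood function
        W += lr * h[:, None] * (x - W)
    return W
```

After training, each unit's prototype summarizes a region of feature space; feature vectors from affected and unaffected joints would map to different units.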
Modern Micro and Nanoparticle-Based Imaging Techniques
Ryvolova, Marketa; Chomoucka, Jana; Drbohlavova, Jana; Kopel, Pavel; Babula, Petr; Hynek, David; Adam, Vojtech; Eckschlager, Tomas; Hubalek, Jaromir; Stiborova, Marie; Kaiser, Jozef; Kizek, Rene
2012-01-01
The requirements for early diagnostics as well as effective treatment of insidious diseases such as cancer constantly increase the pressure on the development of efficient and reliable methods for targeted drug/gene delivery as well as imaging of treatment success or failure. One of the most recent approaches covering both the drug delivery and the imaging aspects benefits from the unique properties of nanomaterials; therefore, a new field called nanomedicine is attracting continuously growing attention. Nanoparticles, including fluorescent semiconductor nanocrystals (quantum dots) and magnetic nanoparticles, have proven their excellent properties for in vivo imaging in a number of modalities such as magnetic resonance and fluorescence imaging, respectively. In this article, we review the main properties and applications of nanoparticles in various in vitro imaging techniques, including microscopy and/or laser breakdown spectroscopy, and in vivo methods such as magnetic resonance imaging and/or fluorescence-based imaging. Moreover, the advantages of drug delivery performed by nanocarriers such as iron oxides, gold, biodegradable polymers, dendrimers, and lipid-based carriers such as liposomes or micelles are also highlighted. PMID:23202187
2015-01-01
Retinal fundus images are widely used in diagnosing and providing treatment for several eye diseases. Prior works using retinal fundus images detected the presence of exudation with the aid of a publicly available dataset using an extensive segmentation process. Though proved to be computationally efficient, they failed to create a diabetic retinopathy feature selection system for transparently diagnosing the disease state. The diagnosis also did not employ machine learning methods to categorize candidate fundus images into true positive and true negative ratios, and did not include a more detailed feature selection technique for diabetic retinopathy. To apply machine learning methods and classify candidate fundus images on the basis of sliding windows, a method called Diabetic Fundus Image Recuperation (DFIR) is designed in this paper. The initial phase of the DFIR method selects the features of the optic cup in digital retinal fundus images based on a sliding window approach; with this, the disease state for diabetic retinopathy is assessed. The feature selection in the DFIR method uses a collection of sliding windows to obtain the features based on the histogram value; the histogram-based feature selection with the aid of a Group Sparsity Non-overlapping function provides more detailed feature information. Using a Support Vector Model in the second phase, the DFIR method, based on a Spiral Basis Function, effectively ranks the diabetic retinopathy diseases. The ranking of the disease level for each candidate set provides a promising basis for developing a practically automated diabetic retinopathy diagnosis system. Experimental work on digital fundus images using the DFIR method evaluates factors such as sensitivity, specificity rate, ranking efficiency, and feature selection time. PMID:25974230
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-10-25
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
Adapting the eButton to the abilities of children for diet assessment
USDA-ARS?s Scientific Manuscript database
Dietary assessment is fraught with error among adults and especially among children. Innovative technology may provide more accurate assessments of dietary intake. One recently available innovative method is a camera worn on the chest (called an eButton) that takes images of whatever is in front of ...
USDA-ARS?s Scientific Manuscript database
Market demands for cotton varieties with improved fiber properties also call for the development of fast, reliable analytical methods for monitoring fiber development and measuring their properties. Currently, cotton breeders rely on instrumentation that can require significant amounts of sample, w...
Deeply learnt hashing forests for content based image retrieval in prostate MR images
NASA Astrophysics Data System (ADS)
Shah, Amit; Conjeti, Sailesh; Navab, Nassir; Katouzian, Amin
2016-03-01
The deluge in the size and heterogeneity of medical image databases necessitates content-based retrieval systems for their efficient organization. In this paper, we propose such a system to retrieve prostate MR images which share similarities in appearance and content with a query image. We introduce deeply learnt hashing forests (DL-HF) for this image retrieval task. DL-HF effectively leverages the semantic descriptiveness of deeply learnt convolutional neural networks, used in conjunction with hashing forests, which are unsupervised random forests. DL-HF hierarchically parses the deep-learnt feature space to encode subspaces with compact binary code words. We propose a similarity-preserving feature descriptor called Parts Histogram, which is derived from DL-HF; correlation defined on this descriptor is used as a similarity metric for retrieval from the database. Validation on a publicly available multi-center prostate MR image database established the validity of the proposed approach. The proposed method is fully automated without any user interaction and does not depend on external image standardization such as image normalization and registration. This image retrieval method is generalizable and well suited for retrieval in heterogeneous databases, other imaging modalities, and other anatomies.
Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors
NASA Astrophysics Data System (ADS)
Han, Ling
Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (<100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, the image intensifier, was developed, which revealed the dominating factors that limit the energy resolution performance of iQID cameras. For preclinical imaging applications, a previously developed iQID-based single-photon emission computed tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity, and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and a system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology toward clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved.
The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.
NASA Astrophysics Data System (ADS)
Su, Tengfei
2018-04-01
In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. Building on a method called under- and over-segmentation aware (UOA), the new approach improves the estimation of over-segmentation error by overcoming a defect in that step. Two cases of this error-prone defect are listed, and edge strength is employed to devise a solution to the issue. Two subsets of high-resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to its improved over-segmentation error detection model.
Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET
NASA Astrophysics Data System (ADS)
Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan
2016-02-01
Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856
Nuclear norm-based 2-DPCA for extracting features from images.
Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong
2015-10-01
The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
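For reference, classical 2-DPCA, the baseline that the nuclear-norm variant extends, computes projection directions as the top eigenvectors of the image covariance matrix; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def two_dpca(images, k):
    """Classical 2-DPCA: the n x k projection matrix X collects the
    top-k eigenvectors of G = E[(A - mean)^T (A - mean)], computed
    directly on 2-D images without vectorizing them."""
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)
    G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
    vals, vecs = np.linalg.eigh(G)
    # eigh returns ascending eigenvalues; keep the k largest
    X = vecs[:, np.argsort(vals)[::-1][:k]]
    return X

# the feature of an m x n image A is the m x k matrix A @ X
```

With k equal to the full column dimension, X is orthogonal and the projection is lossless; smaller k gives the usual compressed features.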
Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM
NASA Astrophysics Data System (ADS)
Shima, Yoshihiro
2018-04-01
Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor; rather than being trained from scratch, Alex-Net pre-trained on the large-scale object-image dataset ImageNet is used. An SVM serves as the trainable classifier, receiving the feature vectors from Alex-Net. The STL-10 dataset, with ten classes and clearly split training and test samples, provides the object images. The STL-10 object images are trained by the SVM with data augmentation using a pattern transformation method based on the cosine function; with it, the original patterns were left-justified, right-justified, top-justified, or bottom-justified, as well as center-justified and enlarged. Other augmentation methods such as rotation, skewing, and elastic distortion were also applied. Augmentation with the cosine transformation decreased the test error rate by 0.435 percentage points from 16.055%, whereas rotation, skewing, and elastic distortion increased error rates compared with no augmentation. The amount of augmented data is 30 times that of the original STL-10 5K training samples. The experimental test error rate for the 8K STL-10 test images was 15.620%, which shows that image augmentation is effective for image category classification.
NASA Astrophysics Data System (ADS)
Ullah, Kaleem; Garcia-Camara, Braulio; Habib, Muhammad; Yadav, N. P.; Liu, Xuefeng
2018-07-01
In this work, we report an indirect way to image the Stokes parameters of a sample under test (SUT) with sub-diffraction scattering information. We apply our previously reported technique, called parametric indirect microscopic imaging (PIMI), which is based on a fitting and filtration process, to measure the Stokes parameters of a submicron particle; a comparison with a classical Stokes measurement is also shown. By modulating the incident field in a precise way, the fitting and filtration process at each pixel of the detector enables PIMI to resolve and sense the scattering information of the SUT and map it in terms of the Stokes parameters. We believe that our findings can be very useful in fields such as singular optics, optical nanoantennas, and biomedicine. The spatial signature of the Stokes parameters given by our method has been confirmed with the finite-difference time-domain (FDTD) method.
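The classical Stokes measurement used for comparison can be sketched per pixel from six polarization-filtered intensity images (these are the standard textbook relations, not the PIMI fitting and filtration process itself; the function name is illustrative):

```python
import numpy as np

def stokes_from_intensities(I0, I90, I45, I135, Ircp, Ilcp):
    """Per-pixel Stokes parameters from six analyzer settings:
    linear 0/90/45/135 degrees and right/left circular."""
    S0 = I0 + I90      # total intensity
    S1 = I0 - I90      # horizontal vs vertical linear preference
    S2 = I45 - I135    # +45 vs -45 linear preference
    S3 = Ircp - Ilcp   # right vs left circular preference
    return np.stack([S0, S1, S2, S3])
```

For fully horizontally polarized light, for example, this yields S = (1, 1, 0, 0) up to normalization.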
Decision Making Based on Fuzzy Aggregation Operators for Medical Diagnosis from Dental X-ray images.
Ngan, Tran Thi; Tuan, Tran Manh; Son, Le Hoang; Minh, Nguyen Hai; Dey, Nilanjan
2016-12-01
Medical diagnosis is considered an important step in dentistry treatment, assisting clinicians in making decisions about a patient's diseases. It has been affirmed that the accuracy of medical diagnosis, which is much influenced by the clinician's experience and knowledge, plays an important role in effective treatment therapies. In this paper, we propose a novel decision-making method based on fuzzy aggregation operators for medical diagnosis from dental X-ray images. It first divides a dental X-ray image into segments and identifies the corresponding diseases with a classification method called Affinity Propagation Clustering (APC+). Lastly, the most probable disease is found using fuzzy aggregation operators. Experimental validation on real dental datasets of Hanoi Medical University Hospital, Vietnam showed the superiority of the proposed method over relevant methods in terms of accuracy.
NASA Astrophysics Data System (ADS)
Lei, Dong; Bai, Pengxiang; Zhu, Feipeng
2018-01-01
Nowadays, acetabular prosthesis replacement is widely used in clinical medicine. However, there is no efficient way to evaluate the implantation effect of the prosthesis. Based on a modern photomechanics technique called digital image correlation (DIC), an evaluation method for the installation effect of the acetabulum during prosthetic replacement of a hip joint was established. The DIC method determines the strain field by comparing speckle images of the undeformed sample with its deformed counterpart. Three groups of experiments were carried out to verify the feasibility of the DIC method for testing acetabulum installation deformation. Experimental results indicate that the installation deformation of the acetabulum generally includes elastic deformation (corresponding to a principal strain of about 1.2%) and plastic deformation. When the installation angle is ideal, the plastic deformation can be effectively reduced, which could prolong the service life of acetabular prostheses.
Identification of suitable fundus images using automated quality assessment methods.
Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet
2014-04-01
Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
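The L+S split that LASSI builds on can be illustrated with a toy alternating scheme: singular-value thresholding for the low-rank component and soft-thresholding for the sparse component (a generic robust-PCA-style sketch, not the paper's algorithm; the function name and parameters are illustrative):

```python
import numpy as np

def l_plus_s(M, lam=0.5, mu=1.0, iters=100):
    """Toy L+S decomposition: alternately shrink singular values of the
    non-sparse residual (low-rank part L) and soft-threshold what the
    low-rank part cannot explain (sparse part S), so M is approximated
    by L + S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        # low-rank update: singular-value thresholding of M - S
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - mu, 0.0)) @ Vt
        # sparse update: entrywise soft-thresholding of M - L
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

On a low-rank matrix corrupted by a few large spikes, L recovers the smooth structure while S isolates the spikes.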
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed by discriminating each boundary via the Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most boundary-based methods, and that the integrated result outperforms all state-of-the-art methods.
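The Hausdorff distance used above to discriminate boundaries measures how far two point sets are from each other; a minimal numpy sketch (not the authors' implementation):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (m,d) and B (n,d):
    the largest distance from any point of one set to the nearest point of
    the other."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```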
Disjunctive Normal Shape and Appearance Priors with Applications to Image Segmentation.
Mesadi, Fitsum; Cetin, Mujdat; Tasdizen, Tolga
2015-10-01
The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. Active shape and appearance models require landmark points and assume unimodal shape and appearance distributions. Level-set-based shape priors are limited to global shape similarity. In this paper, we present novel shape and appearance priors for image segmentation based on an implicit parametric shape representation called the disjunctive normal shape model (DNSM). The DNSM is formed by a disjunction of conjunctions of half-spaces defined by discriminants. We learn shape and appearance statistics at varying spatial scales using nonparametric density estimation. Our method can generate a rich set of shape variations by locally combining training shapes. Additionally, by studying the intensity and texture statistics around each discriminant of our shape model, we construct a local appearance probability map. Experiments carried out on both medical and natural image datasets show the potential of the proposed method.
A Bayesian Nonparametric Approach to Image Super-Resolution.
Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid
2015-02-01
Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.
Image analysis for microelectronic retinal prosthesis.
Hallum, L E; Cloherty, S L; Lovell, N H
2008-01-01
By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete luminous spots (so-called phosphenes) in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, the frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method for assessing image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers and their perceptual errors of omission and commission.
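The mutual-information function can be estimated from a joint histogram of paired stimulus and percept samples; a generic sketch (the bin count is an arbitrary choice of ours, not from the paper):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """I(X;Y) in bits, estimated from a 2D histogram of paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```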
Jeon, Gwanggil; Dubois, Eric
2013-01-01
This paper adapts the least-squares luma-chroma demultiplexing (LSLCD) demosaicking method to noisy Bayer color filter array (CFA) images. A model is presented for the noise in white-balanced gamma-corrected CFA images. A method to estimate the noise level in each of the red, green, and blue color channels is then developed. Based on the estimated noise parameters, one of a finite set of configurations adapted to a particular level of noise is selected to demosaic the noisy data. The noise-adaptive demosaicking scheme is called LSLCD with noise estimation (LSLCD-NE). Experimental results demonstrate state-of-the-art performance over a wide range of noise levels, with low computational complexity. Many results with several algorithms, noise levels, and images are presented on our companion web site along with software to allow reproduction of our results.
NASA Astrophysics Data System (ADS)
Wan, Qianwen; Panetta, Karen; Agaian, Sos
2017-05-01
Autonomous facial recognition systems are widely used in real-life applications, such as homeland and border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the AT&T database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation.
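The local binary pattern descriptor mentioned above encodes each pixel by thresholding its 8 neighbors against it; a minimal sketch (interior pixels only, no uniform-pattern or multi-radius handling):

```python
import numpy as np

def lbp(img):
    """8-neighbor local binary pattern code for interior pixels."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    out = np.zeros((h, w), dtype=np.uint8)
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                 (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (di, dj) in enumerate(neighbors):
        shifted = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        out[1:-1, 1:-1] |= (shifted >= center).astype(np.uint8) << bit
    return out
```

Recognition then typically compares histograms of these codes over image blocks.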
Image encryption using random sequence generated from generalized information domain
NASA Astrophysics Data System (ADS)
Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu
2016-05-01
A novel image encryption method based on the random sequence generated from the generalized information domain and permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image while random sequences are treated as keystreams. A new factor called drift factor is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced to make the encryption method an approximately one-time pad. Experimental results show that the random sequences pass the NIST statistical test with a high ratio and extensive analysis demonstrates that the new encryption scheme has superior security.
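A generic permutation-diffusion cipher of the kind described can be sketched as follows. The key scheduling here (a seeded PRNG) is a stand-in for the paper's generalized-information random sequence generator, and the names are ours:

```python
import numpy as np

def keystreams(key_seed, n):
    """Derive the P-box permutation and the XOR keystream from a key."""
    rng = np.random.default_rng(key_seed)
    return rng.permutation(n), rng.integers(0, 256, n, dtype=np.uint8)

def encrypt(img, key_seed):
    perm, ks = keystreams(key_seed, img.size)
    return (img.flatten()[perm] ^ ks).reshape(img.shape)   # shuffle, then diffuse

def decrypt(cipher, key_seed):
    perm, ks = keystreams(key_seed, cipher.size)
    flat = np.empty(cipher.size, dtype=np.uint8)
    flat[perm] = cipher.flatten() ^ ks                     # undo diffusion, unshuffle
    return flat.reshape(cipher.shape)
```

The paper's per-image initial value would correspond to mixing an image-dependent nonce into `key_seed`, making the scheme approximately a one-time pad.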
Extending the Li&Ma method to include PSF information
NASA Astrophysics Data System (ADS)
Nievas-Rosillo, M.; Contreras, J. L.
2016-02-01
The so-called Li & Ma formula is still the most frequently used method for estimating the significance of observations carried out by Imaging Atmospheric Cherenkov Telescopes. In this work, a straightforward extension of the method for point sources that profits from the good imaging capabilities of current instruments is proposed. It is based on a likelihood ratio under the assumption of a well-known PSF and a smooth background. Its performance is tested with Monte Carlo simulations based on real observations, and its sensitivity is compared to standard methods that do not incorporate PSF information. The gain in significance attributable to the inclusion of the PSF is around 10% and can be boosted if a background model is assumed or a finer binning is used.
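For reference, the standard Li & Ma significance (their Eq. 17) that this work extends can be written as:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983), Eq. 17: significance of n_on counts in the ON region
    given n_off counts in the OFF region and ON/OFF exposure ratio alpha."""
    t_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    t_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (t_on + t_off))
```

With equal exposures (alpha = 1) and equal counts the significance is zero, as expected for a pure-background observation.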
Wavelet-space Correlation Imaging for High-speed MRI without Motion Monitoring or Data Segmentation
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2014-01-01
Purpose This study aims to 1) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and 2) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Methods Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called “wavelet-space correlation imaging”, is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Results Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Conclusion Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. PMID:25470230
Local multifractal detrended fluctuation analysis for non-stationary image's texture segmentation
NASA Astrophysics Data System (ADS)
Wang, Fang; Li, Zong-shou; Li, Jin-wei
2014-12-01
Feature extraction plays an important role in image processing and pattern recognition. As a powerful tool, multifractal theory has recently been employed for this job. However, traditional multifractal methods are designed to analyze objects with a stationary measure and cannot handle non-stationary measures. The work of this paper is twofold. First, the definition of a stationary image and 2D image feature detection methods are proposed. Second, a novel feature extraction scheme for non-stationary images is proposed using local multifractal detrended fluctuation analysis (Local MF-DFA), which is based on 2D MF-DFA. A set of new multifractal descriptors, called the local generalized Hurst exponent (Lhq), is defined to characterize the local scaling properties of textures. To test the proposed method, the novel texture descriptor is compared in segmentation experiments against two other multifractal indicators, namely, local Hölder coefficients based on a capacity measure and the multifractal dimension Dq based on the multifractal differential box-counting (MDBC) method. The first experiment indicates that the segmentation results obtained by the proposed Lhq are slightly better than the MDBC-based Dq and significantly superior to the local Hölder coefficients. The results of the second experiment demonstrate that the Lhq distinguishes texture images more effectively and provides significantly more robust segmentations than the MDBC-based Dq.
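The detrended fluctuation analysis underlying MF-DFA is easiest to illustrate in 1D: the Hurst exponent is the slope of log F(s) against log s. This is an order-1, q = 2 sketch, not the authors' local 2D variant:

```python
import numpy as np

def dfa_fluctuations(x, scales):
    """Order-1 DFA: RMS fluctuation of the detrended profile at each scale."""
    profile = np.cumsum(x - np.mean(x))
    F = []
    for s in scales:
        n_seg = len(profile) // s
        sq = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# Hurst exponent: slope of log F(s) vs log s (about 0.5 for white noise)
x = np.random.default_rng(0).standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
h = np.polyfit(np.log(scales), np.log(dfa_fluctuations(x, scales)), 1)[0]
```

MF-DFA generalizes this by raising the segment variances to a power q/2 before averaging, giving the q-dependent generalized Hurst exponent.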
Fast myopic 2D-SIM super resolution microscopy with joint modulation pattern estimation
NASA Astrophysics Data System (ADS)
Orieux, François; Loriette, Vincent; Olivo-Marin, Jean-Christophe; Sepulveda, Eduardo; Fragola, Alexandra
2017-12-01
Super-resolution in structured illumination microscopy (SIM) is obtained through de-aliasing of modulated raw images, in which high frequencies are measured indirectly inside the optical transfer function. Usual approaches that use 9 or 15 images are often too slow for dynamic studies. Moreover, as experimental conditions change with time, modulation parameters must be estimated within the images. This paper tackles the problem of image reconstruction for fast super resolution in SIM, where the number of available raw images is reduced to four instead of nine or fifteen. Within an optimization framework, the solution is inferred via a joint myopic criterion for image and modulation (or acquisition) parameters, leading to what is frequently called a myopic or semi-blind inversion problem. The estimate is chosen as the minimizer of the nonlinear criterion, numerically calculated by means of a block coordinate optimization algorithm. The effectiveness of the proposed method is demonstrated for simulated and experimental examples. The results show precise estimation of the modulation parameters jointly with the reconstruction of the super resolution image. The method also shows its effectiveness for thick biological samples.
Banerjee, Abhirup; Maji, Pradipta
2015-12-01
The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis technique, particularly due to the presence of intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It integrates judiciously the concept of rough sets and the merit of a novel probability distribution, called stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of brain MR image is modeled as a mixture of finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.
Salehpour, Mehdi; Behrad, Alireza
2017-10-01
This study proposes a new algorithm for nonrigid coregistration of synthetic aperture radar (SAR) and optical images. The proposed algorithm employs point features extracted by the binary robust invariant scalable keypoints algorithm and a new method called weighted bidirectional matching for initial correspondence. To refine false matches, we assume that the transformation between SAR and optical images is locally rigid. This property is used to refine false matches by assigning scores to matched pairs and clustering local rigid transformations using a two-layer Kohonen network. Finally, the thin plate spline algorithm and mutual information are used for nonrigid coregistration of SAR and optical images.
NASA Technical Reports Server (NTRS)
Appleton, P. N.; Siqueira, P. R.; Basart, J. P.
1993-01-01
The presence of diffuse extended IR emission from the Galaxy in the form of the so-called 'Galactic Cirrus' emission has hampered the exploration of the extragalactic sky at long IR wavelengths. We describe the development of a filter based on mathematical morphology which appears to be a promising approach to the problem of cirrus removal. The method of greyscale morphology was applied to a 100 micron IRAS image of the M81 group of galaxies. This is an extragalactic field which suffers from serious contamination from foreground Galactic 'cirrus'. Using a technique called 'sieving', it was found that the cirrus emission has a characteristic behavior which can be quantified in terms of an average spatial structure spectrum or growth function. This function was then used to attempt to remove 'cirrus' from the entire image. The result was a significant reduction of cirrus emission, by an intensity factor of 15 compared with the original input image. The method appears to preserve extended emission in the spatially extended IR disks of M81 and M82, as well as distinguishing fainter galaxies within bright regions of Galactic cirrus. The techniques may also be applicable to IR databases obtained with the Cosmic Background Explorer.
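The 'sieving' idea, successive greyscale openings of increasing size while tracking how much intensity each scale removes, can be sketched with a flat structuring element. This is a slow illustrative loop, not the original implementation:

```python
import numpy as np

def grey_opening(img, k):
    """Greyscale opening with a flat k x k structuring element:
    erosion (local min) followed by dilation (local max)."""
    pad = k // 2
    def local(a, op):
        p = np.pad(a, pad, mode='edge')
        out = np.empty_like(a, dtype=float)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = op(p[i:i + k, j:j + k])
        return out
    return local(local(img, np.min), np.max)

def growth_function(img, sizes):
    """Total intensity removed by openings at each scale (the 'sieve')."""
    return [float((img - grey_opening(img, s)).sum()) for s in sizes]
```

Structure smaller than the element (e.g. a point source) is removed by the opening, while structure at least as large (e.g. a galaxy disk) survives, which is what lets the growth function separate cirrus from extended emission.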
NASA Technical Reports Server (NTRS)
Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)
2001-01-01
The rapid increase in digital data volumes from new and existing sensors necessitates the need for efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates the three fractal dimension measurement methods: isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, that have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Similar to the fractal techniques, the spatial autocorrelation techniques are found to be useful to measure complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
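Moran's I, one of the two spatial autocorrelation measures implemented in ICAMS, has a compact closed form; a sketch for a flattened value vector and a binary, zero-diagonal weights matrix:

```python
import numpy as np

def morans_i(z, W):
    """Moran's I: n * z'Wz / (sum(W) * z'z), with z mean-centered."""
    z = z - z.mean()
    return z.size * (z @ W @ z) / (W.sum() * (z @ z))

# 4 pixels in a row with chain adjacency; equal-valued halves autocorrelate:
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
z = np.array([1.0, 1.0, -1.0, -1.0])
```

Positive values indicate clustering of similar pixel values, near zero indicates spatial randomness, which matches the paper's observation that the measure is most useful for complex images.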
Enhancing SDO/HMI images using deep learning
NASA Astrophysics Data System (ADS)
Baso, C. J. Díaz; Ramos, A. Asensio
2018-06-01
Context. The Helioseismic and Magnetic Imager (HMI) provides continuum images and magnetograms with a cadence better than one per minute. It has been continuously observing the Sun 24 h a day for the past 7 yr. The trade-off between full disk observations and spatial resolution means that HMI is not adequate for analyzing the smallest-scale events in the solar atmosphere. Aims: Our aim is to develop a new method to enhance HMI data, simultaneously deconvolving and super-resolving images and magnetograms. The resulting images will mimic observations with a diffraction-limited telescope twice the diameter of HMI. Methods: Our method, which we call Enhance, is based on two deep, fully convolutional neural networks that input patches of HMI observations and output deconvolved and super-resolved data. The neural networks are trained on synthetic data obtained from simulations of the emergence of solar active regions. Results: We have obtained deconvolved and super-resolved HMI images. To solve this ill-defined problem with infinite solutions we have used a neural network approach to add prior information from the simulations. We test Enhance against Hinode data that has been degraded to a 28 cm diameter telescope showing very good consistency. The code is open source.
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Seemann, Rudolf; Figl, Michael; Hummel, Johann; Ede, Christopher; Homolka, Peter; Yang, Xinhui; Niederer, Peter; Bergmann, Helmar
2005-05-01
3D/2D registration, the automatic assignment of a global rigid-body transformation matching the coordinate systems of patient and preoperative volume scan using projection images, is an important topic in image-guided therapy and radiation oncology. A crucial part of most 3D/2D registration algorithms is the fast computation of digitally rendered radiographs (DRRs) to be compared iteratively to radiographs or portal images. Since registration is an iterative process, fast generation of DRRs—which are perspective summed voxel renderings—is desired. In this note, we present a simple and rapid method for generation of DRRs based on splat rendering. As opposed to conventional splatting, antialiasing of the resulting images is not achieved by means of computing a discrete point spread function (a so-called footprint), but by stochastic distortion of either the voxel positions in the volume scan or by the simulation of a focal spot of the x-ray tube with non-zero diameter. Our method generates slightly blurred DRRs suitable for registration purposes at framerates of approximately 10 Hz when rendering volume images with a size of 30 MB.
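The stochastic-jitter idea can be illustrated with a parallel-beam simplification (the paper renders perspective projections; the function names and the Gaussian jitter scale here are ours):

```python
import numpy as np

def splat_drr(volume, jitter=0.3, seed=0):
    """Summed-voxel rendering along z; antialiasing comes from randomly
    jittering each voxel's splat position instead of computing a
    footprint kernel."""
    rng = np.random.default_rng(seed)
    nz, ny, nx = volume.shape
    drr = np.zeros((ny, nx))
    for z, y, x in zip(*np.nonzero(volume)):
        jy = int(round(y + rng.normal(0.0, jitter)))
        jx = int(round(x + rng.normal(0.0, jitter)))
        if 0 <= jy < ny and 0 <= jx < nx:
            drr[jy, jx] += volume[z, y, x]
    return drr
```

With jitter set to zero this reduces to a plain summed-voxel projection; the non-zero jitter produces the slight blur that the authors exploit for registration.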
Multitask visual learning using genetic programming.
Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz
2008-01-01
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
Learnable despeckling framework for optical coherence tomography images
NASA Astrophysics Data System (ADS)
Adabi, Saba; Rashedi, Elaheh; Clayton, Anne; Mohebbi-Kalkhoran, Hamed; Chen, Xue-wen; Conforto, Silvia; Nasiriavanaki, Mohammadreza
2018-01-01
Optical coherence tomography (OCT) is a prevalent, interferometric, high-resolution imaging method with broad biomedical applications. Nonetheless, OCT images suffer from an artifact called speckle, which degrades the image quality. Digital filters offer an opportunity for image improvement in clinical OCT devices, where hardware modification to enhance images is expensive. To reduce speckle, a wide variety of digital filters have been proposed; selecting the most appropriate filter for an OCT image/image set is a challenging decision, especially in dermatology applications of OCT, where a wide variety of tissues is imaged. To tackle this challenge, we propose an expandable, learnable despeckling framework, which we call LDF. LDF decides which speckle reduction algorithm is most effective on a given image by learning a figure of merit (FOM) as a single quantitative image assessment measure. LDF is learnable, meaning that when it is implemented on an OCT machine, it is retrained on each given image/image set and its performance improves. LDF is also expandable, meaning that any despeckling algorithm can easily be added to it. The architecture of LDF includes two main parts: (i) an autoencoder neural network and (ii) a filter classifier. The autoencoder learns the FOM based on several quality assessment measures obtained from the OCT image, including signal-to-noise ratio, contrast-to-noise ratio, equivalent number of looks, edge preservation index, and mean structural similarity index. Subsequently, the filter classifier identifies the most efficient filter from the following categories: (a) sliding-window filters, including median, mean, and symmetric nearest neighborhood; (b) adaptive statistical filters, including Wiener, homomorphic Lee, and Kuwahara; and (c) edge-preserving patch- or pixel-correlation-based filters, including nonlocal mean, total variation, and block-matching three-dimensional filtering.
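Two of the quality measures feeding the LDF autoencoder have simple closed forms; minimal sketches (population statistics; choosing the regions is up to the user):

```python
import numpy as np

def enl(region):
    """Equivalent number of looks over a homogeneous region:
    (mean/std)^2, higher means smoother speckle."""
    return (region.mean() / region.std()) ** 2

def cnr(roi, background):
    """Contrast-to-noise ratio between a region of interest and background."""
    return abs(roi.mean() - background.mean()) / np.sqrt(
        (roi.var() + background.var()) / 2.0)
```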
Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.
Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin
2011-11-01
Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
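The term frequency-inverse document frequency baseline that the boosted weighting is compared against can be sketched for visual-word histograms:

```python
import numpy as np

def tf_idf(counts):
    """counts: (n_images, n_words) visual-word histograms -> tf-idf weights."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                 # images containing each word
    idf = np.log(counts.shape[0] / np.maximum(df, 1))
    return tf * idf
```

A visual word that occurs in every image gets idf = 0 and thus no discriminative weight, which is exactly the behavior the boosting-based weighting tries to improve upon.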
Accurate estimation of motion blur parameters in noisy remote sensing image
NASA Astrophysics Data System (ADS)
Shi, Xueyan; Wang, Lin; Shao, Xiaopeng; Wang, Huilin; Tao, Zhong
2015-05-01
The relative motion between the remote sensing satellite's sensor and objects on the ground is one of the most common causes of remote sensing image degradation. It seriously weakens image data interpretation and information extraction. In practice, the point spread function (PSF) must be estimated before image restoration, so identifying the motion blur direction and length accurately is crucial for estimating the PSF and restoring the image with precision. In general, the regular light-and-dark stripes in the spectrum can be used to obtain these parameters via the Radon transform. However, the serious noise present in actual remote sensing images often makes the stripes indistinct, so the parameters become difficult to calculate and the resulting error relatively large. In this paper, an improved motion blur parameter identification method for noisy remote sensing images is proposed to solve this problem. The spectral characteristics of noisy remote sensing images are analyzed first. An interactive image segmentation method based on graph theory, called GrabCut, is adopted to effectively extract the edge of the light center in the spectrum. The motion blur direction is estimated by applying the Radon transform to the segmentation result. To reduce random error, a method based on whole-column statistics is used when calculating the blur length. Finally, the Lucy-Richardson algorithm is applied to restore remote sensing images of the moon after estimating the blur parameters.
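Once direction and length are identified, the PSF for linear motion blur can be constructed directly; a minimal sketch (nearest-pixel rasterization, names ours):

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Normalized PSF of linear motion: a line of the given (integer) length
    and direction rasterized onto a size x size kernel."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 8 * length):
        y = int(round(c + t * np.sin(theta)))
        x = int(round(c + t * np.cos(theta)))
        psf[y, x] = 1.0
    return psf / psf.sum()
```

This kernel is what a deconvolution routine such as Lucy-Richardson would take as its PSF input.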
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
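The first class, thresholding-based segmentation, can be sketched in a few lines: lung parenchyma sits far below soft tissue in Hounsfield units, and air connected to the image border is discarded. This is a 2D toy version, and the -400 HU cutoff is a common but illustrative choice:

```python
import numpy as np
from collections import deque

def threshold_lungs(ct_hu, thresh=-400.0):
    """Keep sub-threshold (air-like) pixels not connected to the border."""
    mask = ct_hu < thresh
    h, w = mask.shape
    border_air = np.zeros_like(mask)
    q = deque((i, j) for i in range(h) for j in range(w)
              if mask[i, j] and (i in (0, h - 1) or j in (0, w - 1)))
    while q:                                   # flood fill from the border
        i, j = q.popleft()
        if 0 <= i < h and 0 <= j < w and mask[i, j] and not border_air[i, j]:
            border_air[i, j] = True
            q.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return mask & ~border_air
```

As the review notes, such a scheme fails exactly where it matters clinically: a dense consolidation or pleural effusion is above the threshold and drops out of the mask.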
Measuring food intake with digital photography
Martin, Corby K.; Nicklas, Theresa; Gunturk, Bahadir; Correa, John B.; Allen, H. Raymond; Champagne, Catherine
2014-01-01
The Digital Photography of Foods Method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared to images of “standard” portions of food using a computer application. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. Herein, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone, and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analyzed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behavior and to receive dietary recommendations to achieve weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children are also reviewed. The body of research reviewed herein demonstrates that digital imaging accurately estimates food intake in many environments and has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and incorporation of computer automation to improve the accuracy, efficiency, and cost-effectiveness of the method. PMID:23848588
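The comparison step described above reduces to simple portion arithmetic. A minimal sketch, with a function name and numbers of our own choosing (not from the paper):

```python
# Hypothetical sketch of the intake arithmetic behind the Digital
# Photography of Foods Method: selection and leftovers are expressed as
# fractions of an imaged "standard" portion, and intake is their
# difference scaled by the standard portion's energy content.
def estimated_intake(selected_frac, leftover_frac, standard_kcal):
    """Energy intake (kcal) relative to a standard portion."""
    return (selected_frac - leftover_frac) * standard_kcal

# e.g. a serving 1.25x the standard portion with a quarter portion left over
kcal = estimated_intake(selected_frac=1.25, leftover_frac=0.25, standard_kcal=320.0)
```

The same ratio would be applied to each nutrient tracked per standard portion.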
Alomari, Yazan M.; MdZin, Reena Rahayu
2015-01-01
Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions of the whole slide under the microscope; these are called focus-point regions. This procedure leads to high interobserver variability, involves time-consuming and tedious work, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, the RPPD method can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods and achieved good performance when the results were evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved. PMID:25793010
Fast template matching with polynomials.
Omachi, Shinichiro; Omachi, Masako
2007-08-01
Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image. The proposed algorithm is effective especially when the width and height of the template image differ from the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
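The core idea — approximating the template with a polynomial so it can be compared against windows of varying size — can be sketched in one dimension. This is our own illustration under simplifying assumptions, not the paper's 2-D algorithm: we fit a Legendre expansion to a template and score candidate windows of several widths on each window's rescaled coordinate grid.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Fit a low-order Legendre expansion to the template; the expansion can
# then be evaluated on any window's rescaled grid, regardless of width.
def legendre_approx(template, degree=6):
    x = np.linspace(-1.0, 1.0, len(template))
    return leg.legfit(x, template, degree)

def window_similarity(coeffs, window):
    x = np.linspace(-1.0, 1.0, len(window))  # rescale window to [-1, 1]
    approx = leg.legval(x, coeffs)
    return -float(np.sum((approx - window) ** 2))  # higher = more similar

signal = np.sin(np.linspace(0.0, 3.0 * np.pi, 200))
template = signal[40:80]                     # pattern to search for
coeffs = legendre_approx(template)

# Slide windows of several widths over the signal; keep the best match.
best = max(
    ((s, w, window_similarity(coeffs, signal[s:s + w]))
     for w in (30, 40, 50)
     for s in range(len(signal) - w)),
    key=lambda item: item[2],
)
start, width, score = best
```

With this toy signal the best-scoring window recovers both the location (40) and the width (40) of the embedded template.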
Ma, Jieshi; Xu, Canhua; Dai, Meng; You, Fusheng; Shi, Xuetao; Dong, Xiuzhen; Fu, Feng
2014-01-01
Stroke has high mortality and disability rates and should be rapidly diagnosed to improve prognosis. Diagnosing stroke is not a problem for hospitals with CT, MRI, and other imaging devices, but it is difficult for community hospitals without these devices. Based on the fact that the electrical impedance of the two hemispheres of a normal human head is basically symmetrical and that a stroke can alter this symmetry, a fast electrical impedance imaging method called symmetrical electrical impedance tomography (SEIT) is proposed. In this technique, electrical impedance tomography (EIT) data measured from the undamaged craniocerebral hemisphere (CCH) are regarded as reference data for the EIT data measured from the other CCH, so that difference imaging can identify the differences in resistivity distribution between the two CCHs. The results of SEIT imaging based on simulation data from a 2D human head finite element model and on data from a physical phantom of the human head verified this method for the detection of unilateral stroke.
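The symmetry-as-reference idea can be illustrated on toy data: measurements over a symmetric head cancel against their mirror image, while a lesion leaves a nonzero difference. This sketch assumes a 1-D electrode ordering of our own and is not the paper's reconstruction algorithm:

```python
import numpy as np

# Schematic of the SEIT premise: the healthy hemisphere, mirrored,
# serves as the reference for difference imaging of the other one.
def symmetry_difference(measurements):
    return measurements - measurements[::-1]   # subtract the mirror image

normal = np.array([1.0, 2.0, 3.0, 3.0, 2.0, 1.0])   # symmetric head
stroke = np.array([1.0, 2.0, 3.0, 3.0, 2.6, 1.0])   # lesion breaks symmetry
```

The difference vanishes for the symmetric case and is nonzero where the lesion disturbs the left-right symmetry, which is what SEIT's difference imaging exploits.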
MARTA GANs: Unsupervised Representation Learning for Remote Sensing Image Classification
NASA Astrophysics Data System (ADS)
Lin, Daoyu; Fu, Kun; Wang, Yang; Xu, Guangluan; Sun, Xian
2017-11-01
With the development of deep learning, supervised learning has frequently been adopted to classify remotely sensed images using convolutional neural networks (CNNs). However, due to the limited amount of labeled data available, supervised learning is often difficult to carry out. Therefore, we propose an unsupervised model called multiple-layer feature-matching generative adversarial networks (MARTA GANs) to learn a representation using only unlabeled data. MARTA GANs consists of both a generative model $G$ and a discriminative model $D$. We treat $D$ as a feature extractor. To fit the complex properties of remote sensing data, we use a fusion layer to merge the mid-level and global features. $G$ can produce numerous images that are similar to the training data; therefore, $D$ can learn better representations of remotely sensed images using the training data provided by $G$. The classification results on two widely used remote sensing image databases show that the proposed method significantly improves the classification performance compared with other state-of-the-art methods.
A smart technique for attendance system to recognize faces through parallelism
NASA Astrophysics Data System (ADS)
Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.
2017-11-01
The face is a major part of recognizing a person, and with the help of image processing techniques we can exploit a person's physical features. In the traditional approach used in schools and colleges, the professor calls each student's name and then marks the student's attendance. In this paper we deviate from that old approach and adopt a new one based on image processing techniques. We present spontaneous attendance marking for students in the classroom. First, an image of the classroom is taken and stored in a data record. To the images stored in the database we apply an algorithm comprising steps such as histogram classification, noise removal, face detection, and face recognition. Using these steps we detect the faces and compare them with the database. Attendance is marked automatically if the system recognizes the faces.
Nonlinear features for classification and pose estimation of machined parts from single views
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-10-01
A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations to use are obtained in closed form, and offer significant advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new 2-stage nonlinear MRDF solution, and show it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.
An Accurate Framework for Arbitrary View Pedestrian Detection in Images
NASA Astrophysics Data System (ADS)
Fan, Y.; Wen, G.; Qiu, S.
2018-01-01
We consider the problem of detecting pedestrians in images collected from various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). First, the positive training samples are clustered into entities that represent similar viewpoints. Then principal component analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, the samples that can be reconstructed by linear approximation from their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradients (HOG) are used as the feature descriptors, and the sliding-window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves higher performance than state-of-the-art methods in terms of effectiveness and efficiency.
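The accept/reject rule — reconstruct a candidate from its nearest viewpoint subspace and threshold the residual — can be sketched on synthetic data. PCA stands in for the shared-feature step; the cluster geometry, dimensions, and threshold below are our assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: positives from two "viewpoints", each lying near its own
# low-dimensional affine subspace in a 5-D feature space.
def fit_affine_subspace(X, dim):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)  # PCA via SVD
    return mean, Vt[:dim]

def reconstruction_error(x, mean, basis):
    z = (x - mean) @ basis.T           # coordinates in the subspace
    recon = mean + z @ basis
    return float(np.linalg.norm(x - recon))

def make_cluster(offset, n=200):
    latent = rng.normal(size=(n, 2))   # 2-D latent structure
    mixing = rng.normal(size=(2, 5))
    return offset + latent @ mixing + 0.01 * rng.normal(size=(n, 5))

clusters = [make_cluster(np.zeros(5)), make_cluster(10.0 * np.ones(5))]
models = [fit_affine_subspace(X, dim=2) for X in clusters]

def is_pedestrian(x, threshold=0.5):
    # accept if the nearest viewpoint subspace reconstructs x well
    return min(reconstruction_error(x, m, B) for m, B in models) < threshold

positive = clusters[0][0]              # sample drawn from one viewpoint
negative = np.full(5, 5.0)             # far from both subspaces
```

Because only the positive clusters are modeled, nothing resembling a negative training set is needed — anything far from every viewpoint subspace is rejected by the residual threshold.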
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
Fuzzy object models for newborn brain MR image segmentation
NASA Astrophysics Data System (ADS)
Kobashi, Syoji; Udupa, Jayaram K.
2013-03-01
Newborn brain MR image segmentation is a challenging problem because of the variety of size, shape, and MR signal, although it is a fundamental step for quantitative radiology of brain MR images. Because of the large differences between the adult brain and the newborn brain, it is difficult to directly apply conventional methods to the newborn brain. Inspired by the original fuzzy object model introduced by Udupa et al. at SPIE Medical Imaging 2011, called the fuzzy shape object model (FSOM) here, this paper introduces the fuzzy intensity object model (FIOM) and proposes a new image segmentation method that combines FSOM and FIOM into fuzzy connected (FC) image segmentation. The fuzzy object models are built from training datasets in which the cerebral parenchyma is delineated by experts. After registering the FSOM with the image under evaluation, the proposed method roughly recognizes the cerebral parenchyma region based on prior knowledge of location, shape, and MR signal given by the registered FSOM and FIOM. Then, FC image segmentation delineates the cerebral parenchyma using the fuzzy object models. The proposed method has been evaluated on 9 newborn brain MR images using the leave-one-out strategy. The revised age was between -1 and 2 months. Quantitative evaluation using false positive volume fraction (FPVF) and false negative volume fraction (FNVF) was conducted: an FPVF of 0.75% and an FNVF of 3.75% were achieved. More data collection and testing are underway.
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
Automated biomedical image classification must confront challenges of high-level noise, image blur, illumination variation, and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence-score-based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1, while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
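The stage-1 reject option amounts to thresholding a signed confidence score. A schematic sketch, with linear scoring functions standing in for the trained SVMs and a margin threshold of our own choosing; only the control flow mirrors the description above:

```python
import numpy as np

# Toy two-stage cascade with a reject option: stage 1 keeps only
# confident decisions; low-confidence samples fall through to stage 2.
w_stage1 = np.array([1.0, 0.0])        # stand-in for the stage-1 SVM
w_stage2 = np.array([0.0, 1.0])        # stand-in for the ERE+SVM stage

def cascade_predict(x, reject_margin=1.0):
    score = float(x @ w_stage1)        # signed, SVM-margin-like confidence
    if abs(score) >= reject_margin:    # confident: decide at stage 1
        return int(score > 0), 1
    # low confidence: rejected, re-scored by the stage-2 model
    return int(float(x @ w_stage2) > 0), 2

confident = np.array([2.0, -1.0])      # decided at stage 1
ambiguous = np.array([0.1, 3.0])       # rejected, decided at stage 2
```

The returned tuple reports both the class and the stage that decided it, which is how such cascades are typically evaluated (accuracy per stage, rejection rate).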
Hou, Bin; Wang, Yunhong; Liu, Qingjie
2016-01-01
Characterizing up-to-date information about the Earth’s surface is an important application, providing insights for urban planning, resource monitoring, and environmental studies. A large number of change detection (CD) methods have been developed for this purpose utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further presents challenges to traditional CD methods and opportunities for object-based CD methods. While several kinds of geospatial objects can be recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The study's effectiveness was checked using visual evaluation and numerical evaluation. PMID:27618903
Joucla, Sébastien; Franconville, Romain; Pippow, Andreas; Kloppenburg, Peter; Pouzat, Christophe
2013-08-01
Calcium imaging has become a routine technique in neuroscience for subcellular- to network-level investigations. The fast progress in the development of new indicators and imaging techniques calls for dedicated, reliable analysis methods. In particular, efficient and quantitative background fluorescence subtraction routines would benefit most of the calcium imaging research field. A background-subtracted fluorescence transient estimation method that does not require any independent background measurement is therefore developed. This method is based on a fluorescence model fitted to single-trial data using a classical nonlinear regression approach. The model includes an appropriate probabilistic description of the acquisition system's noise, leading to accurate confidence intervals on all quantities of interest (background fluorescence, normalized background-subtracted fluorescence time course) when the background fluorescence is homogeneous. An automatic procedure detecting background inhomogeneities inside the region of interest is also developed and is shown to be efficient on simulated data. The implementation and performance of the proposed method on experimental recordings from the mouse hypothalamus are presented in detail. The method, which applies to both single-cell and bulk-stained tissue recordings, should help improve the statistical comparison of fluorescence calcium signals between experiments and studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
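The single-trial fitting idea can be sketched with a toy model F(t) = B + A·exp(-t/τ): the background B is recovered jointly with the transient, so no separate background measurement is needed. The model form, the grid search over τ, and all constants are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Simulate a single-trial fluorescence trace: constant background plus a
# decaying transient plus acquisition noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
B_true, A_true, tau_true = 50.0, 20.0, 1.0
trace = B_true + A_true * np.exp(-t / tau_true) + rng.normal(0.0, 0.2, t.size)

def fit_background(t, y, taus=np.linspace(0.2, 3.0, 60)):
    # Grid over the nonlinear parameter tau; (B, A) are linear and
    # solved by least squares at each grid point.
    best = None
    for tau in taus:
        X = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = float(np.sum((X @ coef - y) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], tau)
    _, B_hat, A_hat, tau_hat = best
    return B_hat, A_hat, tau_hat

B_hat, A_hat, tau_hat = fit_background(t, trace)
```

Once B is estimated from the trial itself, the normalized background-subtracted time course (trace − B)/B follows directly, which is the quantity the paper builds confidence intervals for.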
ERIC Educational Resources Information Center
Huesca, Robert
The participatory method of image production holds enormous potential for communication and journalism scholars operating out of a critical/cultural framework. The methodological potentials of mechanical reproduction were evident in the 1930s, when Walter Benjamin contributed three enduring concepts: questioning the art/document dichotomy; placing…
The Economics of Managed Print and Imaging Services
2011-06-01
process are called poka-yokes, which are methods to prevent mistakes. This combination of controls is designed to make a system foolproof because it...must be reformatted prior to turn-in.” This sticker serves as a poka-yoke, as mentioned in Chapter IV. A breach of PII can also result from
USDA-ARS?s Scientific Manuscript database
An emerging poultry meat quality concern is associated with chicken breast fillets having an uncharacteristically hard or rigid feel (called the wooden breast condition). The cause of the wooden breast condition is still largely unknown, and there is no single objective evaluation method or system k...
Iliotibial band friction syndrome
2010-01-01
Published articles on iliotibial band friction syndrome have been reviewed. These articles cover the epidemiology, etiology, anatomy, pathology, prevention, and treatment of the condition. This article describes (1) the various etiological models that have been proposed to explain iliotibial band friction syndrome; (2) some of the imaging methods, research studies, and clinical experiences that support or call into question these various models; (3) commonly proposed treatment methods for iliotibial band friction syndrome; and (4) the rationale behind these methods and the clinical outcome studies that support their efficacy. PMID:21063495
Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.
Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H
2017-01-01
A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan
2017-06-01
An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This device is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently of the LSV. However, this device requires very precise alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the vibration measurements are imprecise, owing to the movement of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is tested experimentally.
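The landmark-trajectory modeling step can be sketched as a circle fit: a landmark on a rotating object traces a circle whose centre estimates the rotation axis, and the centre's offset from the derotator axis (taken as the image origin here) gives the misalignment to correct. The Kåsa algebraic fit below is our choice of model, not necessarily the paper's:

```python
import numpy as np

# Kasa algebraic circle fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in the
# least-squares sense; (a, b) is the centre and r^2 = c + a^2 + b^2.
def fit_circle(x, y):
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, radius

# Simulated landmark trajectory around a misaligned rotation centre.
theta = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
cx, cy, r_true = 3.0, -2.0, 10.0
x = cx + r_true * np.cos(theta)
y = cy + r_true * np.sin(theta)

a, b, radius = fit_circle(x, y)
misalignment = float(np.hypot(a, b))   # offset from the derotator axis
```

On noise-free points the algebraic fit recovers the centre exactly; with real image measurements the residual of the fit also gives a sanity check on the landmark tracking.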
Soft x-ray holographic tomography for biological specimens
NASA Astrophysics Data System (ADS)
Gao, Hongyi; Chen, Jianwen; Xie, Honglan; Li, Ruxin; Xu, Zhizhan; Jiang, Shiping; Zhang, Yuxuan
2003-10-01
In this paper, we present some experimental results on X-ray holography and holographic tomography, and a new holographic tomography method called pre-amplified holographic tomography is proposed. Due to their shorter wavelength and larger penetration depths, X-rays provide the potential for higher resolution in imaging techniques and have the ability to image intact, living, hydrated cells without slicing, dehydration, chemical fixation, or staining. Recently, using the X-ray source at the National Synchrotron Radiation Laboratory in Hefei, we successfully performed soft X-ray holography experiments on biological specimens. The specimens used in the experiments were garlic clove epidermis; we recorded their X-ray holograms and then reconstructed them with computer programs, and the features of the cell walls, the nuclei, and some cytoplasm were clearly resolved. However, there still exist some problems in realizing practical 3D microscopic imaging due to the near-unity refractive index of matter. There is no X-ray optics with a sufficiently high numerical aperture to achieve a depth resolution comparable to the transverse resolution. On the other hand, computed tomography requires recording hundreds of views of the test object at different angles for high resolution, because the number of views required for a densely packed object is equal to the object radius divided by the desired depth resolution. Clearly, this is impractical for a radiation-sensitive biological specimen. Moreover, the X-ray diffraction effect blurs the projection data, which badly degrades the resolution of the reconstructed image. In order to observe the 3D structure of biological specimens, McNulty proposed a new method for 3D imaging called "holographic tomography (HT)", in which several holograms of the specimen are recorded from various illumination directions and combined in the reconstruction step.
This permits the specimen to be sampled over a wide range of spatial frequencies to improve the depth resolution. At NSRL, we performed soft X-ray holographic tomography experiments. The specimen was spider filaments, with PMMA as the recording medium. By 3D CT reconstruction of the projection data, the three-dimensional density distribution of the specimen was obtained. We also developed a new X-ray holographic tomography method called pre-amplified holographic tomography. The method permits digital real-time 3D reconstruction with high resolution and a simple, compact experimental setup.
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows in the images. The literature presents several algorithms to detect and remove shadows in the scene, most of them based on RGB-to-HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
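A minimal shadow-detection sketch in the spirit of the HSI-based detectors mentioned above: shadow pixels tend to be dark but comparatively saturated. The criterion and both thresholds are our assumptions, not the algorithms compared in the paper:

```python
import numpy as np

# Flag pixels that are dark (low intensity, the I of HSI) but still
# comparatively saturated, a common heuristic for cast shadows.
def shadow_mask(img, i_thresh=80.0, s_thresh=0.2):
    img = img.astype(float)
    intensity = img.mean(axis=-1)
    saturation = 1.0 - img.min(axis=-1) / np.maximum(intensity, 1e-9)
    return (intensity < i_thresh) & (saturation > s_thresh)

img = np.full((2, 2, 3), 200.0)       # bright, unsaturated scene
img[1, 1] = [40.0, 50.0, 70.0]        # dark, bluish pixel: shadow candidate
mask = shadow_mask(img)
```

In practice the thresholds would be tuned per scene, and the resulting mask is exactly what the brightness-restoration step (here, the Procrustes-based corrections) is applied to.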
Classify epithelium-stroma in histopathological images based on deep transferable network.
Yu, X; Zheng, H; Liu, C; Huang, Y; Ding, X
2018-04-20
Recently, deep learning methods have received more attention in histopathological image analysis. However, traditional deep learning methods assume that training data and test data have the same distribution, which causes certain limitations in real-world histopathological applications, and it is costly to recollect a large amount of labeled histology data to train a new neural network for each specified image acquisition procedure, even for similar tasks. In this paper, unsupervised domain adaptation is introduced into a typical deep convolutional neural network (CNN) model to avoid repeated relabeling. The unsupervised domain adaptation is implemented by adding two regularisation terms, namely feature-based adaptation and entropy minimisation, to the objective function of a widely used CNN model, the AlexNet. Three independent public epithelium-stroma datasets were used to verify the proposed method. The experimental results demonstrate that in epithelium-stroma classification the proposed method achieves better performance than commonly used deep learning methods and some existing deep domain adaptation methods. Therefore, the proposed method can be considered a better option for real-world applications of histopathological image analysis, because there is no requirement to recollect large-scale labeled data for every specified domain. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
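The entropy-minimisation regulariser mentioned above can be shown in isolation: the sketch below merely computes the mean prediction entropy that such a term would add to the loss on unlabeled target samples, pushing the network toward confident predictions on the new domain. The network itself and the feature-based adaptation term are omitted.

```python
import numpy as np

def entropy_minimisation(probs):
    """Mean Shannon entropy of softmax outputs, shape (n_samples, n_classes).

    Adding this quantity to the training objective penalises uncertain
    predictions on unlabeled target-domain data.
    """
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -np.mean(np.sum(p * np.log(p), axis=1))
```

Confident (near one-hot) outputs contribute almost nothing, while uniform outputs contribute log of the class count, so minimising this term sharpens the decision boundary on the target domain.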
Nema, Shubham; Hasan, Whidul; Bhargava, Anamika; Bhargava, Yogesh
2016-09-15
Behavioural neuroscience relies on software-driven methods for behavioural assessment, but the field lacks cost-effective, robust, open-source software for behavioural analysis. Here we propose a novel method, which we call ZebraTrack. It includes a cost-effective imaging setup for distraction-free behavioural acquisition, automated tracking using the open-source ImageJ software, and a workflow for extraction of behavioural endpoints. Our ImageJ algorithm is capable of giving users control at key steps while maintaining automation in tracking, without the need to install external plugins. We validated this method by testing novelty-induced anxiety behaviour in adult zebrafish. Our results, in agreement with established findings, showed that during state anxiety zebrafish exhibit reduced distance travelled, increased thigmotaxis and more freezing events. Furthermore, we propose a method to represent both the spatial and the temporal distribution of choice-based behaviour, which is currently not possible with simple videograms. The ZebraTrack method is simple and economical, yet robust enough to give results comparable with those obtained from costly proprietary software such as EthoVision XT. We have developed and validated a novel cost-effective method for behavioural analysis of adult zebrafish using open-source ImageJ software. Copyright © 2016 Elsevier B.V. All rights reserved.
A new image segmentation method based on multifractal detrended moving average analysis
NASA Astrophysics Data System (ADS)
Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le
2015-08-01
In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined using the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested by two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithms. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.
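A one-dimensional sketch of the detrended moving average estimate of the generalized Hurst exponent h(q) may clarify the per-pixel feature the abstract describes; the paper works on 2-D surfaces, and the scale list and q = 2 here are illustrative assumptions. The θ parameter selects backward (θ = 0), centered (θ = 0.5) or forward (θ = 1) positioning of the moving-average window, matching the three cases tested in the paper.

```python
import numpy as np

def hurst_dma(x, scales=(4, 8, 16, 32), q=2, theta=0.5):
    """1-D DMA estimate of h(q): slope of log F_q(n) vs log n."""
    y = np.cumsum(x - np.mean(x))          # profile of the series
    logF, logn = [], []
    for n in scales:
        kernel = np.ones(n) / n
        trend = np.convolve(y, kernel, mode='valid')   # moving average
        offset = int((n - 1) * (1 - theta))            # window positioning
        resid = y[offset:offset + len(trend)] - trend  # detrended residual
        F = np.mean(np.abs(resid) ** q) ** (1.0 / q)   # q-th order fluctuation
        logF.append(np.log(F))
        logn.append(np.log(n))
    return np.polyfit(logn, logF, 1)[0]    # h(q) from the log-log slope
```

For uncorrelated noise the estimate should sit near 0.5; in the 2-D version of the method, this exponent computed in a window around each pixel becomes that pixel's local feature.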
Statistical characterization of portal images and noise from portal imaging systems.
González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge
2013-06-01
In this paper, we consider the statistical characteristics of so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the noise and image features well known in other image modalities, such as natural images, can also be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of the results for several noise reduction methods that operate in the wavelet domain.
Impulse Noise Cancellation of Medical Images Using Wavelet Networks and Median Filters
Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeid; Gheissari, Niloofar
2012-01-01
This paper presents a new two-stage approach to impulse noise removal for medical images based on a wavelet network (WN). The first stage is noise detection, in which the so-called gray-level difference and average background difference are used as the inputs of a WN; the wavelet network serves as preprocessing for the second stage. The second stage removes the impulse noise with a median filter. The wavelet network presented here is a fixed one, without learning. Experimental results show that our method suppresses impulse noise effectively while preserving chromaticity and image details very well. PMID:23493998
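The two-stage idea (detect suspected impulses, then median-filter only those pixels) can be sketched as follows. The local-median detector here is a simple stand-in for the paper's wavelet-network detector, and the threshold is a hypothetical value; the point is that clean pixels pass through untouched, which is what preserves detail.

```python
import numpy as np

def remove_impulse_noise(img, diff_thresh=50):
    """Detect pixels far from their 3x3 median, replace only those."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            med = np.median(padded[i:i + 3, j:j + 3])
            if abs(out[i, j] - med) > diff_thresh:  # stage 1: likely impulse
                out[i, j] = med                     # stage 2: median replacement
    return out.astype(img.dtype)
```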
Optic disc segmentation: level set methods and blood vessels inpainting
NASA Astrophysics Data System (ADS)
Almazroa, A.; Sun, Weiwei; Alodhayb, Sami; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2017-03-01
Segmenting the optic disc (OD) is an important and essential step in creating a frame of reference for diagnosing optic nerve head (ONH) pathology such as glaucoma. Therefore, a reliable OD segmentation technique is necessary for automatic screening of ONH abnormalities. The main contribution of this paper is in presenting a novel OD segmentation algorithm based on applying a level set method on a localized OD image. To prevent the blood vessels from interfering with the level set process, an inpainting technique is applied. The algorithm is evaluated using a new retinal fundus image dataset called RIGA (Retinal Images for Glaucoma Analysis). In the case of low quality images, a double level set is applied in which the first level set is considered to be a localization for the OD. Five hundred and fifty images are used to test the algorithm accuracy as well as its agreement with manual markings by six ophthalmologists. The accuracy of the algorithm in marking the optic disc area and centroid is 83.9%, and the best agreement is observed between the results of the algorithm and manual markings in 379 images.
NASA Astrophysics Data System (ADS)
Ai, Lingyu; Kim, Eun-Soo
2018-03-01
We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with the P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depths. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and region-based multi-level encryption. The combination of the three parts ensures that the same remote sensing images, after multi-level encryption, can be distributed to users with different permissions through multicast, while each user obtains only the degree of information allowed by his or her own decryption keys. This meets the access control and security needs of high resolution remote sensing image distribution well. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission over the internet of remote sensing images containing confidential information.
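The hierarchical key generation described above can be illustrated with a toy hash-chain scheme: a level-k user can derive all lower-level keys from their own key but cannot go upward. Both the chain construction and the XOR stream cipher below are illustrative assumptions for the sketch, not the paper's actual protocol, and the XOR cipher is not secure for real use.

```python
import hashlib

def derive_keys(master_key, levels):
    """Hash-chain hierarchy: keys[0] is the highest-permission key;
    each lower key is the SHA-256 of the one above it."""
    keys = [master_key]
    for _ in range(levels - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys

def xor_crypt(data, key):
    """Toy symmetric cipher for the sketch only (encrypt == decrypt)."""
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))
```

Under this scheme each image region would be encrypted with the key of its sensitivity level, so one multicast stream serves all users while each decrypts only the regions at or below their permission level.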
NASA Astrophysics Data System (ADS)
Park, Y. O.; Hong, D. K.; Cho, H. S.; Je, U. K.; Oh, J. E.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Jang, W. S.; Cho, H. M.; Choi, S. I.; Koo, Y. S.
2013-09-01
In this paper, we introduce an effective imaging system for digital tomosynthesis (DTS) with a circular X-ray tube, the so-called circular-DTS (CDTS) system, and its image reconstruction algorithm based on the total-variation (TV) minimization method for low-dose, high-accuracy X-ray imaging. Here, the X-ray tube is equipped with a series of cathodes distributed around a rotating anode, and the detector remains stationary throughout the image acquisition. We considered a TV-based reconstruction algorithm that exploited the sparsity of the image with substantially high image accuracy. We implemented the algorithm for the CDTS geometry and successfully reconstructed images of high accuracy. The image characteristics were investigated quantitatively by using some figures of merit, including the universal-quality index (UQI) and the depth resolution. For selected tomographic angles of 20, 40, and 60°, the corresponding UQI values in the tomographic view were estimated to be about 0.94, 0.97, and 0.98, and the depth resolutions were about 4.6, 3.1, and 1.2 voxels in full width at half maximum (FWHM), respectively. We expect the proposed method to be applicable to developing a next-generation dental or breast X-ray imaging system.
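The universal-quality index (UQI) used above as a figure of merit is the classical Wang-Bovik index, combining correlation, luminance and contrast agreement between a reconstruction and its reference; a minimal sketch:

```python
import numpy as np

def uqi(x, y):
    """Universal quality index Q = 4*cov*mx*my / ((vx+vy)*(mx^2+my^2));
    Q = 1 iff the two images are identical."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Values near 1, like the 0.94-0.98 reported for the CDTS reconstructions, indicate close agreement with the reference image.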
NASA Astrophysics Data System (ADS)
Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José
2013-12-01
In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth in medical image volume sizes due to the increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; (3) a change in processor development, turning towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also offers a good ability to handle topology changes and allows effective parallelization of the workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo
2014-05-01
Quantitative analysis of cardiac motion is important for the evaluation of heart function. Three-dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field, instead of the displacement field, is optimized. The optimal velocity field optimizes a novel similarity function, which we call the intensity consistency error, defined over multiple consecutive frames evolved to each time point. The optimization problem is solved using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions are in good agreement with ground truth and that the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both the TDFFD and frame-to-frame methods. The authors propose a diffeomorphic motion estimation method with temporal smoothness obtained by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The motion estimated with the authors' method has good temporal consistency and is more accurate than that of other temporally diffeomorphic motion estimation methods.
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for non-convex regularization functionals.
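The nesting idea can be illustrated with an ordinary preconditioned CG for a linear system, where the preconditioner is supplied as a callable solve. In NNCG that callable would itself be an inner (linear) CG run; here a simple Jacobi solve stands in for it, as an illustrative assumption.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-8, maxit=200):
    """Preconditioned conjugate gradient for A x = b, A symmetric
    positive definite; M_solve(r) applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)            # preconditioning step (inner solve in NNCG)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

In the non-linear outer iteration of NNCG the matrix-vector products are replaced by gradient evaluations of the restoration objective, but the role of the preconditioning solve is the same.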
Illuminant color estimation based on pigmentation separation from human skin color
NASA Astrophysics Data System (ADS)
Tanaka, Satomi; Kakinuma, Akihiro; Kamijo, Naohiro; Takahashi, Hiroshi; Tsumura, Norimichi
2015-03-01
Humans have a visual mechanism called "color constancy" that maintains the perceived colors of the same object across various light sources. An effective color constancy algorithm has been proposed that uses human facial color in a digital color image; however, this method produces erroneous estimates because of differences among individual facial colors. In this paper, we present a novel color constancy algorithm based on skin color analysis, a method that separates skin color into melanin, hemoglobin and shading components. We exploit a stationary property of Japanese facial color, calculated from the melanin and hemoglobin components. As a result, the proposed method uses the subject's facial color in the image without depending on individual differences among Japanese facial colors.
Calibration for single multi-mode fiber digital scanning microscopy imaging system
NASA Astrophysics Data System (ADS)
Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong
2015-11-01
A single multimode fiber (MMF) digital scanning imaging system is a development trend of the modern endoscope. We concentrate on the calibration method for such an imaging system. Calibration comprises two processes: forming scanning focused spots and calibrating the couple factors, which vary with position. The adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: high speed, a small amount of calculation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the couple factor. We set up a calibration experimental system to form the scanning focused spots and calculate the couple factors for different object positions. The experimental results show that the couple factor is higher in the center than at the edge.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have a significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle each color image channel independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of manifold parametric blur structures, and the information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
Thermal expansion coefficient determination of polylactic acid using digital image correlation
NASA Astrophysics Data System (ADS)
Botean, Adrian-Ioan
2018-02-01
This paper aims at determining the linear coefficient of thermal expansion (CTE) of polylactic acid (PLA) using an optical method for measuring deformations called digital image correlation (DIC). Because PLA is often used to make parts with 3D printing technology, knowing this coefficient makes it possible to obtain a higher degree of precision in the construction of parts and to monitor deformations when these parts are subjected to a thermal gradient. Two PLA discs with 20% and 40% infill are used. In parallel with this approach, the linear thermal expansion coefficient was determined for the copper cylinder on whose surface the two PLA discs are placed.
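The CTE computation itself is elementary: DIC supplies the thermal strain as a relative length change, and the coefficient is that strain divided by the temperature rise. A sketch with purely hypothetical numbers (not measurements from the paper):

```python
def thermal_strain(dL, L0):
    """Engineering strain from a DIC-measured length change."""
    return dL / L0

def cte(strain, dT):
    """Linear coefficient of thermal expansion: alpha = strain / dT."""
    return strain / dT

# Hypothetical example: a 50 mm gauge length grows 0.070 mm over a 20 K rise
alpha = cte(thermal_strain(0.070, 50.0), 20.0)   # 7e-5 1/K
```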
Tao, Ran; Fletcher, P Thomas; Gerber, Samuel; Whitaker, Ross T
2009-01-01
This paper presents a method for correcting the geometric and greyscale distortions in diffusion-weighted MRI that result from inhomogeneities in the static magnetic field. These inhomogeneities may be due to imperfections in the magnet or to spatial variations in the magnetic susceptibility of the object being imaged, the so-called susceptibility artifacts. Echo-planar imaging (EPI), used in virtually all diffusion-weighted acquisition protocols, assumes a homogeneous static field, which generally does not hold for head MRI. The resulting distortions are significant, sometimes more than ten millimeters. These artifacts impede accurate alignment of diffusion images with structural MRI and are generally considered an obstacle to the joint analysis of connectivity and structure in head MRI. In principle, susceptibility artifacts can be corrected by acquiring (and applying) a field map. However, as shown in the literature and demonstrated in this paper, field-map corrections of susceptibility artifacts are not entirely accurate and reliable, and thus field maps do not produce reliable alignment of EPIs with corresponding structural images. This paper presents a new, image-based method for correcting susceptibility artifacts. The method relies on a variational formulation of the match between an EPI baseline image and a corresponding T2-weighted structural image, but also specifically accounts for the physics of susceptibility artifacts. We derive a set of partial differential equations associated with the optimization, describe the numerical methods for solving these equations, and present results that demonstrate the effectiveness of the proposed method compared with field-map correction.
Yothers, Mitchell P; Browder, Aaron E; Bumm, Lloyd A
2017-01-01
We have developed a real-space method to correct distortion due to thermal drift and piezoelectric actuator nonlinearities on scanning tunneling microscope images using Matlab. The method uses the known structures typically present in high-resolution atomic and molecularly resolved images as an internal standard. Each image feature (atom or molecule) is first identified in the image. The locations of each feature's nearest neighbors are used to measure the local distortion at that location. The local distortion map across the image is simultaneously fit to our distortion model, which includes thermal drift in addition to piezoelectric actuator hysteresis and creep. The image coordinates of the features and image pixels are corrected using an inverse transform from the distortion model. We call this technique the thermal-drift, hysteresis, and creep transform. Performing the correction in real space allows defects, domain boundaries, and step edges to be excluded with a spatial mask. Additional real-space image analyses are now possible with these corrected images. Using graphite(0001) as a model system, we show lattice fitting to the corrected image, averaged unit cell images, and symmetry-averaged unit cell images. Statistical analysis of the distribution of the image features around their best-fit lattice sites measures the aggregate noise in the image, which can be expressed as feature confidence ellipsoids.
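The fit-then-invert correction step can be illustrated with a simplified affine distortion model: fit a least-squares transform from ideal lattice sites to measured feature positions, then apply its inverse to undo the distortion. The authors' actual model additionally includes hysteresis and creep terms, and all numbers below are hypothetical.

```python
import numpy as np

def fit_affine(ideal, measured):
    """Least-squares affine map (row-vector convention) taking ideal
    lattice sites to measured positions; returns a 3x3 matrix T with
    [x' y' 1] = [x y 1] @ T."""
    ones = np.ones((len(ideal), 1))
    T, *_ = np.linalg.lstsq(np.hstack([ideal, ones]), measured, rcond=None)
    return np.hstack([T, np.array([[0.0], [0.0], [1.0]])])

def undistort(points, T):
    """Apply the inverse transform to bring points back to the ideal frame."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    return (pts_h @ np.linalg.inv(T))[:, :2]
```

In the paper the same inverse-transform idea is applied to both the feature coordinates and the image pixels, with a spatial mask excluding defects and step edges from the fit.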
An earth imaging camera simulation using wide-scale construction of reflectance surfaces
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk
2013-10-01
Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Huang, H.; Liu, J.; Pan, Y.
2012-07-01
The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes; it uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI-classification methods.
Two-dimensional angular transmission characterization of CPV modules.
Herrero, R; Domínguez, C; Askins, S; Antón, I; Sala, G
2010-11-08
This paper proposes a fast method to characterize the two-dimensional angular transmission function of a concentrator photovoltaic (CPV) system. The so-called inverse method, which has been used in the past for the characterization of small optical components, has been adapted to large-area CPV modules. In the inverse method, the receiver cell is forward biased to produce a Lambertian light emission, which reveals the reverse optical path of the optics. Using a large-area collimator mirror, the light beam exiting the optics is projected on a Lambertian screen to create a spatially resolved image of the angular transmission function. An image is then obtained using a CCD camera. To validate this method, the angular transmission functions of a real CPV module have been measured by both direct illumination (flash CPV simulator and sunlight) and the inverse method, and the comparison shows good agreement.
X-space MPI: magnetic nanoparticles for safe medical imaging.
Goodwill, Patrick William; Saritas, Emine Ulku; Croft, Laura Rose; Kim, Tyson N; Krishnan, Kannan M; Schaffer, David V; Conolly, Steven M
2012-07-24
One quarter of all iodinated contrast X-ray clinical imaging studies are now performed on Chronic Kidney Disease (CKD) patients. Unfortunately, the iodine contrast agent used in X-ray is often toxic to CKD patients' weak kidneys, leading to significant morbidity and mortality. Hence, we are pioneering a new medical imaging method, called Magnetic Particle Imaging (MPI), to replace X-ray and CT iodinated angiography, especially for CKD patients. MPI uses magnetic nanoparticle contrast agents that are much safer than iodine for CKD patients. MPI already offers superb contrast and extraordinary sensitivity. The iron oxide nanoparticle tracers required for MPI are also used in MRI, and some are already approved for human use, but the contrast agents are far more effective at illuminating blood vessels when used in the MPI modality. We have recently developed a systems theoretic framework for MPI called x-space MPI, which has already dramatically improved the speed and robustness of MPI image reconstruction. X-space MPI has allowed us to optimize the hardware for five MPI scanners. Moreover, x-space MPI provides a powerful framework for optimizing the size and magnetic properties of the iron oxide nanoparticle tracers used in MPI. Currently MPI nanoparticles have diameters in the 10-20 nanometer range, enabling millimeter-scale resolution in small animals. X-space MPI theory predicts that larger nanoparticles could enable up to 250 micrometer resolution imaging, which would represent a major breakthrough in safe imaging for CKD patients.
NASA Astrophysics Data System (ADS)
Rössler, Tomáš; Hrabovský, Miroslav; Pluháček, František
2005-08-01
The cotyle implant is abraded in the patient's body, and its shape changes. Information about the magnitude of abrasion is contained in the resulting contour map of the implant. The locations and dimensions of abraded areas can be computed from the deformation of the contours. The method called single-projector moire topography was used to determine the contour lines. A theoretical description of the method is given first, followed by the design of the experimental set-up. A light grating projector was developed to produce the periodic structure on the measured surface. The fringe-shifting method was used to increase the quantity of data. A description of the digital processing applied to the moire grating images is given at the end, together with examples of processed images.
Classification and pose estimation of objects using nonlinear features
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.
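The closed-form flavor of the approach can be illustrated with a generic second-order (Sigma-Pi-style) feature expansion whose weights are obtained by least squares; this is a sketch of the general idea under those assumptions, not the actual MRDF transform:

```python
import numpy as np

def second_order_features(X):
    """Augment inputs with all pairwise products, Sigma-Pi style:
    [x1, x2, ..., x1*x1, x1*x2, x2*x2, ...].
    (A generic quadratic expansion, not the exact MRDF mapping.)"""
    n, d = X.shape
    pairs = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([X] + pairs)

# closed-form (least-squares) weights on the expanded features separate a
# class that is NOT linearly separable in the raw inputs:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)   # circular class
Z = np.column_stack([np.ones(len(X)), second_order_features(X)])
w, *_ = np.linalg.lstsq(Z, 2 * y - 1, rcond=None)       # closed-form weights
pred = (Z @ w > 0).astype(float)
accuracy = (pred == y).mean()
```

Because the circular boundary is exactly linear in the expanded space, the closed-form solve finds a good separator with no iterative neural-network training, which mirrors the advantage the abstract claims for MRDF.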
Parallel halftoning technique using dot diffusion optimization
NASA Astrophysics Data System (ADS)
Molina-Garcia, Javier; Ponomaryov, Volodymyr I.; Reyes-Reyes, Rogelio; Cruz-Ramos, Clara
2017-05-01
In this paper, a novel approach is proposed and implemented for halftone images obtained by the dot diffusion (DD) method. The designed technique is based on an optimization of the so-called class matrix used in the DD algorithm: it generates new versions of the class matrix that contain no barons or near-barons, in order to minimize inconsistencies during the distribution of the quantization error. The proposed class matrices have different properties, each designed for one of two applications: applications where inverse halftoning is necessary, and applications where it is not required. The proposed method has been implemented on a GPU (NVIDIA GeForce GTX 750 Ti) and on multicore processors (an AMD FX-6300 six-core processor and an Intel Core i5-4200U), using CUDA and OpenCV on a Linux PC. Experimental results have shown that the novel framework generates halftone images, and inverse-halftone images, of good quality. Simulation results on parallel architectures have demonstrated the efficiency of the novel technique for real-time processing.
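A minimal dot-diffusion sketch shows why barons matter: pixels are processed in class-matrix order, and quantization error can only be pushed to not-yet-processed neighbours, so a pixel with no higher-class neighbour (a baron) has to drop its error. The 4x4 class matrix below is an illustrative ordering, not one of the paper's optimized matrices:

```python
import numpy as np

# illustrative 4x4 class matrix (values 0..15); class 15 pixels are barons
CLASS = np.array([[ 0,  8,  2, 10],
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]])

def dot_diffuse(img):
    """Halftone a grayscale image in [0, 1] by dot diffusion.
    Pixels are visited in increasing class order; the binarization error
    is shared only among 8-neighbours with a strictly higher class value
    (i.e. not yet processed). Barons drop their error entirely, which is
    the inconsistency the paper's optimized class matrices avoid."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    work = img.astype(float).copy()
    cm = np.tile(CLASS, (h // 4 + 1, w // 4 + 1))[:h, :w]
    for flat in np.argsort(cm, axis=None):       # increasing class order
        y, x = divmod(int(flat), w)
        old = work[y, x]
        new = 1.0 if old >= 0.5 else 0.0
        out[y, x] = new
        nbrs = [(yy, xx)
                for yy in range(max(0, y - 1), min(h, y + 2))
                for xx in range(max(0, x - 1), min(w, x + 2))
                if (yy, xx) != (y, x) and cm[yy, xx] > cm[y, x]]
        if nbrs:                                  # barons have no recipients
            share = (old - new) / len(nbrs)
            for yy, xx in nbrs:
                work[yy, xx] += share
    return out

halftone = dot_diffuse(np.full((16, 16), 0.3))
```

Since each pixel depends only on lower-class neighbours, all pixels of a given class can be processed in parallel, which is what makes the method attractive for the GPU implementation described above.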
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images, or different parts of a large scene, is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of image comparisons at very high speed, with the correlation task accomplished mainly in optics. However, the input target image generally differs in scale from the template images used as filters, in which case the correlation values no longer properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method exploits the relationship between the scale variation and the correlation value of two images: a few artificially scaled copies of the input image are sent for comparison with the template images. The correlation value increases with the scale factor over the interval 0.8-1 and decreases over the interval 1-1.2. The original scale of the input image can therefore be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the templates. The measurement range for the scale is 0.8-4.8: scale factors beyond 1.2 are handled by first scaling the input image by a factor of 1/2, 1/3, or 1/4, correlating the rescaled image with the templates, and estimating the residual scale factor within 0.8-1.2.
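The time-sequential idea can be sketched offline: rescale the input at a few candidate factors, correlate each against the template, and keep the factor with the largest correlation. Nearest-neighbour rescaling and normalized cross-correlation stand in here for the optical correlator:

```python
import numpy as np

def rescale_nn(img, s):
    """Nearest-neighbour rescale about the image centre (digital stand-in
    for the artificially scaled input images)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys = np.clip(np.round((yy - cy) / s + cy), 0, h - 1).astype(int)
    xs = np.clip(np.round((xx - cx) / s + cx), 0, w - 1).astype(int)
    return img[ys, xs]

def ncc(a, b):
    """Normalized cross-correlation of two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def estimate_scale(target, template, candidates):
    """Undo each candidate scale on the target and keep the candidate
    whose correlation with the template is largest."""
    scores = [ncc(rescale_nn(target, 1.0 / s), template) for s in candidates]
    return candidates[int(np.argmax(scores))]

# synthetic check: a Gaussian-blob template and an input shrunk to 0.9x
yy, xx = np.mgrid[0:64, 0:64]
template = np.exp(-((yy - 31.5) ** 2 + (xx - 31.5) ** 2) / (2 * 8.0 ** 2))
target = rescale_nn(template, 0.9)
best = estimate_scale(target, template, [0.8, 0.9, 1.0, 1.1, 1.2])
```

In the optical system the rescaled copies are presented sequentially to the correlator, hence "time-sequential"; the peak-picking logic is the same.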
Shadow detection and removal in RGB VHR images for land use unsupervised classification
NASA Astrophysics Data System (ADS)
Movia, A.; Beinat, A.; Crosilla, F.
2016-09-01
Nowadays, high resolution aerial images are widely available thanks to the diffusion of advanced technologies such as UAVs (Unmanned Aerial Vehicles) and new satellite missions. Although these developments offer new opportunities for accurate land use analysis and change detection, cloud and terrain shadows limit the benefits and possibilities of modern sensors. Focusing on the problem of shadow detection and removal in VHR color images, the paper proposes new solutions and analyses how they can enhance common unsupervised classification procedures for identifying land use classes related to CO2 absorption. To this aim, an improved fully automatic procedure has been developed for detecting image shadows using exclusively RGB color information, and avoiding user interaction. Results show a significant accuracy enhancement with respect to similar methods using RGB-based indexes. Furthermore, novel solutions derived from Procrustes analysis have been applied to remove shadows and restore brightness in the images. In particular, two methods implementing the so-called "anisotropic Procrustes" and "not-centered oblique Procrustes" algorithms have been developed and compared with the linear correlation correction method based on the Cholesky decomposition. To assess how shadow removal can enhance unsupervised classifications, results obtained with classical methods such as k-means, maximum likelihood, and self-organizing maps have been compared to each other and with a supervised clustering procedure.
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer from a significant number of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when multiple data are corrupted, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called "corrected Inter-Slice Intensity Discontinuity" (cISID) to detect motion-induced artifacts. We compared the artifact-detection performance of algorithms using cISID and other existing methods. The experimental results show that integrating cISID into fitting-based methods significantly improves retrospective detection performance in post-processing analysis. The cISID criterion used alone was inferior to the fitting-based methods, but it could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of corrupted data. Real-time monitoring based on cISID, followed by post-processing fitting-based outlier rejection, can provide a robust environment for routine DTI studies. PMID:24204551
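The paper's exact cISID formula is not reproduced in the abstract; the hypothetical stand-in below illustrates the underlying idea of flagging slices whose intensity jumps away from that of their neighbours:

```python
import numpy as np

def interslice_discontinuity(volume):
    """Score each slice by how far its mean intensity departs from the
    average of its two neighbours. (A hypothetical stand-in for the
    paper's corrected Inter-Slice Intensity Discontinuity, cISID.)"""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)
    scores = np.zeros(len(means))
    scores[1:-1] = np.abs(means[1:-1] - (means[:-2] + means[2:]) / 2.0)
    return scores

# a motion-induced signal dropout in slice 5 stands out immediately
vol = np.ones((10, 8, 8))
vol[5] *= 0.3
scores = interslice_discontinuity(vol)
```

Because the score needs only per-slice means rather than a tensor fit, this style of criterion is cheap enough for the real-time monitoring and reacquisition loop described above.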
Lopez, Xavier Moles; Debeir, Olivier; Maris, Calliope; Rorive, Sandrine; Roland, Isabelle; Saerens, Marco; Salmon, Isabelle; Decaestecker, Christine
2012-09-01
Whole-slide scanners allow the digitization of an entire histological slide at very high resolution. This new acquisition technique opens a wide range of possibilities for addressing challenging image analysis problems, including the identification of tissue-based biomarkers. In this study, we use whole-slide scanner technology for imaging the proliferating activity patterns in tumor slides based on Ki67 immunohistochemistry. Faced with large images, pathologists require tools that can help them identify tumor regions that exhibit high proliferating activity, called "hot-spots" (HSs). Pathologists need tools that can quantitatively characterize these HS patterns. To respond to this clinical need, the present study investigates various clustering methods with the aim of identifying Ki67 HSs in whole tumor slide images. This task requires a method capable of identifying an unknown number of clusters, which may be highly variable in terms of shape, size, and density. We developed a hybrid clustering method, referred to as Seedlink. Compared to manual HS selections by three pathologists, we show that Seedlink provides an efficient way of detecting Ki67 HSs and improves the agreement among pathologists when identifying HSs. Copyright © 2012 International Society for Advancement of Cytometry.
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications, and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity for information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Alternatively, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods for processing digital holograms for Internet transmission, together with results.
Pattern-histogram-based temporal change detection using personal chest radiographs
NASA Astrophysics Data System (ADS)
Ugurlu, Yucel; Obi, Takashi; Hasegawa, Akira; Yamaguchi, Masahiro; Ohyama, Nagaaki
1999-05-01
Accurate and reliable detection of temporal changes from a pair of images is of considerable interest in medical science. Traditional registration and subtraction techniques can be applied to extract temporal differences when the object is rigid or corresponding points are obvious. In radiological imaging, however, the loss of depth information, the elasticity of the object, the absence of clearly defined landmarks, and three-dimensional positioning differences constrain the performance of conventional registration techniques. In this paper, we propose a new method to detect interval changes accurately without using image registration. The method is based on the construction and comparison of so-called pattern histograms. A pattern histogram is a graphical representation of the frequency counts of all allowable patterns in the multi-dimensional pattern vector space. The k-means algorithm is employed to partition the pattern vector space successively. Any difference between the pattern histograms implies that different patterns are present in the scenes. In our experiment, a pair of chest radiographs of pneumoconiosis is employed, and the changing histogram bins are visualized on both images. We found that the method can be used as an alternative way to detect temporal change, particularly when precise image registration is not available.
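The pattern-histogram construction can be sketched as follows: collect all small patches as pattern vectors, partition the pattern space with k-means, and count patches per cluster; differing bins then flag interval change. The patch size, cluster count, and intensity-ordered initialization are illustrative choices, not the paper's settings:

```python
import numpy as np

def patch_vectors(img, k=3):
    """All k-by-k patches of an image, flattened into pattern vectors."""
    h, w = img.shape
    return np.array([img[y:y + k, x:x + k].ravel()
                     for y in range(h - k + 1) for x in range(w - k + 1)])

def kmeans(X, n_clusters, iters=20):
    """Plain k-means; centers initialized along the total-intensity order."""
    order = np.argsort(X.sum(axis=1))
    idx = order[np.linspace(0, len(X) - 1, n_clusters).astype(int)]
    centers = X[idx].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def pattern_histogram(img, centers, k=3):
    """Patch counts over the k-means partition of pattern vector space."""
    X = patch_vectors(img, k)
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers))

# a bright opacity appears in the second image; its histogram shifts
img1 = np.zeros((10, 10))
img2 = img1.copy()
img2[3:7, 3:7] = 1.0
centers = kmeans(np.vstack([patch_vectors(img1), patch_vectors(img2)]), 2)
h1, h2 = pattern_histogram(img1, centers), pattern_histogram(img2, centers)
```

No spatial correspondence between the two images is used anywhere, which is exactly why the comparison survives the registration failures listed above.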
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
NASA Technical Reports Server (NTRS)
Houlahan, Padraig; Scalo, John
1992-01-01
A new method of image analysis is described, in which images partitioned into 'clouds' are represented by simplified skeleton images, called structure trees, that preserve the spatial relations of the component clouds while disregarding information concerning their sizes and shapes. The method can be used to discriminate between images of projected hierarchical (multiply nested) and random three-dimensional simulated collections of clouds constructed on the basis of observed interstellar properties, and even intermediate systems formed by combining random and hierarchical simulations. For a given structure type, the method can distinguish between different subclasses of models with different parameters and reliably estimate their hierarchical parameters: average number of children per parent, scale reduction factor per level of hierarchy, density contrast, and number of resolved levels. An application to a column density image of the Taurus complex constructed from IRAS data is given. Moderately strong evidence for a hierarchical structural component is found, and parameters of the hierarchy, as well as the average volume filling factor and mass efficiency of fragmentation per level of hierarchy, are estimated. The existence of nested structure contradicts models in which large molecular clouds are supposed to fragment, in a single stage, into roughly stellar-mass cores.
T2 shuffling: Sharp, multicontrast, volumetric fast spin-echo imaging.
Tamir, Jonathan I; Uecker, Martin; Chen, Weitian; Lai, Peng; Alley, Marcus T; Vasanawala, Shreyas S; Lustig, Michael
2017-01-01
A new acquisition and reconstruction method called T2 Shuffling is presented for volumetric fast spin-echo (three-dimensional [3D] FSE) imaging. T2 Shuffling reduces blurring and recovers many images at multiple T2 contrasts from a single acquisition at clinically feasible scan times (6-7 min). The parallel imaging forward model is modified to account for temporal signal relaxation during the echo train. Scan efficiency is improved by acquiring data during the transient signal decay and by increasing echo train lengths without loss in signal-to-noise ratio (SNR). By (1) randomly shuffling the phase encode view ordering, (2) constraining the temporal signal evolution to a low-dimensional subspace, and (3) promoting spatio-temporal correlations through locally low rank regularization, a time series of virtual echo time images is recovered from a single scan. A convex formulation is presented that is robust to partial voluming and radiofrequency field inhomogeneity. Retrospective undersampling and in vivo scans confirm the increase in sharpness afforded by T2 Shuffling. Multiple image contrasts are recovered and used to highlight pathology in pediatric patients. A proof-of-principle method is integrated into a clinical musculoskeletal imaging workflow. The proposed T2 Shuffling method improves the diagnostic utility of 3D FSE by reducing blurring and producing multiple image contrasts from a single scan. Magn Reson Med 77:180-195, 2017. © 2016 Wiley Periodicals, Inc.
High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla
Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.
2012-01-01
The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941
Profile fitting in crowded astronomical images
NASA Astrophysics Data System (ADS)
Manish, Raja
Around 18,000 known objects currently populate near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, CCD detectors are usually used. The information to be extracted from the frames is each object's astrometric position, for which the center of the object's image on the CCD frame has to be found. However, the observation frames read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. To achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between the observed data and a theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated, and the newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
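The closed-form flavour of such centroiding can be sketched with a log-linear least-squares fit of an isotropic Gaussian point-spread function; the weighting scheme and point-source model below are generic assumptions, not the thesis' exact formulation:

```python
import numpy as np

def gaussian_ls_centroid(img, eps=1e-12):
    """Closed-form centroid by least-squares fitting a 2-D Gaussian PSF.
    For I = A*exp(-((x-x0)^2 + (y-y0)^2) / (2*sigma^2)),
    ln I = a + b*x + c*y + d*(x^2 + y^2) with x0 = -b/(2d), y0 = -c/(2d),
    so one weighted linear solve yields the centre analytically.
    Rows are weighted by intensity, where the log data are reliable."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xx, yy = xx.astype(float), yy.astype(float)
    z = np.log(np.clip(img, eps, None)).ravel()
    A = np.column_stack([np.ones(z.size), xx.ravel(), yy.ravel(),
                         (xx ** 2 + yy ** 2).ravel()])
    wts = img.ravel()
    coef, *_ = np.linalg.lstsq(A * wts[:, None], z * wts, rcond=None)
    a, b, c, d = coef
    return -b / (2 * d), -c / (2 * d)      # (x0, y0)

# synthetic non-resolved object image with a sub-pixel centre
yy, xx = np.mgrid[0:32, 0:32]
img = np.exp(-(((xx - 12.3) ** 2 + (yy - 8.7) ** 2) / (2 * 3.0 ** 2)))
x0, y0 = gaussian_ls_centroid(img)
```

The solve is a fixed-size linear system regardless of frame size, which is what makes analytic schemes attractive for near real-time pipelines; crowded fields would first need the separation step mentioned above.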
An incompressible fluid flow model with mutual information for MR image registration
NASA Astrophysics Data System (ADS)
Tsai, Leo; Chang, Herng-Hua
2013-03-01
Image registration is one of the fundamental and essential tasks in image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force that guides the transformation, weighted by a coefficient expressed as the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold [1]. The registration process of updating the body force, the velocity, and the deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.
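The mutual-information weight that drives the body force can be sketched from a joint grey-level histogram; the bin count here is an illustrative choice:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images, estimated from their joint
    grey-level histogram: MI = sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                      # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# aligned (identical) images share maximal information; an unrelated
# image shares almost none, so MI rises as registration improves
rng = np.random.default_rng(1)
img = rng.random((64, 64))
mi_same = mutual_information(img, img)
mi_unrelated = mutual_information(img, rng.random((64, 64)))
```

Because MI depends only on the joint intensity statistics, not on intensities matching directly, it remains a usable similarity weight across differing MR contrasts.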
Volumetric MRI of the lungs during forced expiration.
Berman, Benjamin P; Pandey, Abhishek; Li, Zhitao; Jeffries, Lindsie; Trouard, Theodore P; Oliva, Isabel; Cortopassi, Felipe; Martin, Diego R; Altbach, Maria I; Bilgin, Ali
2016-06-01
Lung function is typically characterized by spirometer measurements, which do not offer spatially specific information. Imaging during exhalation provides spatial information but is challenging due to large movement over a short time. The purpose of this work is to provide a solution to lung imaging during forced expiration using accelerated magnetic resonance imaging. The method uses a radial golden-angle stack-of-stars gradient echo acquisition and compressed sensing reconstruction. A technique for dynamic three-dimensional imaging of the lungs from highly undersampled data is developed and tested on six subjects. This method takes advantage of image sparsity, both spatially and temporally, including the use of reference frames called bookends. Sparsity with respect to total variation, together with the residual from the bookends, enables reconstruction from an extremely limited amount of data. Dynamic three-dimensional images can be captured at sub-150 ms temporal resolution, using only three (or fewer) acquired radial lines per slice per timepoint. The images have a spatial resolution of 4.6×4.6×10 mm. Lung volume calculations based on image segmentation are compared to those from simultaneously acquired spirometer measurements. Dynamic lung imaging during forced expiration is made possible by compressed sensing accelerated dynamic three-dimensional radial magnetic resonance imaging. Magn Reson Med 75:2295-2302, 2016. © 2015 Wiley Periodicals, Inc.
Efficient bias correction for magnetic resonance image denoising.
Mukherjee, Partha Sarathi; Qiu, Peihua
2013-05-30
Magnetic resonance imaging (MRI) is a popular radiology technique used for visualizing the detailed internal structure of the body. Observed MRI images are generated by inverse Fourier transformation of the frequency signals received by a magnetic resonance scanner system. Previous research has demonstrated that the random noise in observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of this complicated noise structure, images denoised by conventional methods are usually biased, and the bias can reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically, and propose a new and more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
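The Rician bias, and a classical moment-based correction (simpler than the paper's regression-based formula), can be demonstrated directly; the simulation parameters are arbitrary:

```python
import numpy as np

def correct_bias(msq, sigma):
    """Classical Rician bias correction: under the model
    M = sqrt((A + n1)^2 + n2^2), with n1, n2 ~ N(0, sigma^2),
    E[M^2] = A^2 + 2*sigma^2, so subtracting 2*sigma^2 from a (denoised)
    squared magnitude recovers the true intensity A. The paper derives a
    more refined correction; this is the textbook first-order version."""
    return np.sqrt(np.maximum(msq - 2.0 * sigma ** 2, 0.0))

rng = np.random.default_rng(0)
a_true, sigma = 2.0, 1.0
n1 = rng.normal(0.0, sigma, 100000)
n2 = rng.normal(0.0, sigma, 100000)
m = np.sqrt((a_true + n1) ** 2 + n2 ** 2)    # observed Rician magnitudes
biased = m.mean()                             # overestimates a_true
corrected = correct_bias((m ** 2).mean(), sigma)
```

The magnitude operation rectifies the noise, which is why a plain average of the observed intensities stays above the true value no matter how many samples are combined.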
Imaging brain tumour microstructure.
Nilsson, Markus; Englund, Elisabet; Szczepankiewicz, Filip; van Westen, Danielle; Sundgren, Pia C
2018-05-08
Imaging is an indispensable tool for brain tumour diagnosis, surgical planning, and follow-up. Definite diagnosis, however, often demands histopathological analysis of microscopic features of tissue samples, which have to be obtained by invasive means. A non-invasive alternative may be to probe corresponding microscopic tissue characteristics by MRI, so-called 'microstructure imaging'. The promise of microstructure imaging is one of 'virtual biopsy', with the goal to offset the need for invasive procedures in favour of imaging that can guide pre-surgical planning and can be repeated longitudinally to monitor and predict treatment response. The exploration of such methods is motivated by the striking link between parameters from MRI and tumour histology, for example the correlation between the apparent diffusion coefficient and cellularity. Recent microstructure imaging techniques probe even more subtle and specific features, providing parameters associated with cell shape, size, permeability, and volume distributions. However, the range of scenarios in which these techniques provide reliable imaging biomarkers that can be used to test medical hypotheses or support clinical decisions is as yet unknown. Accurate microstructure imaging may moreover require acquisitions that go beyond conventional data acquisition strategies. This review covers a wide range of candidate microstructure imaging methods based on diffusion MRI and relaxometry, and explores advantages, challenges, and potential pitfalls in brain tumour microstructure imaging. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ma, Dan; Liu, Jun; Chen, Kai; Li, Huali; Liu, Ping; Chen, Huijuan; Qian, Jing
2016-04-01
In remote sensing fusion, the spatial details of a panchromatic (PAN) image and the spectral information of multispectral (MS) images are transferred into the fused image according to the characteristics of the human visual system. Thus, a remote sensing image fusion quality assessment index called the feature-based fourth-order correlation coefficient (FFOCC) is proposed. FFOCC is based on the feature-based coefficient concept. Spatial features related to the spatial details of the PAN image and spectral features related to the spectral information of the MS images are first extracted from the fused image. Then, the fourth-order correlation coefficient between the spatial and spectral features is calculated and treated as the assessment result. FFOCC was compared with existing, widely used indices, such as the Erreur Relative Globale Adimensionnelle de Synthèse and the quality-with-no-reference index. Results of the fusion and distortion experiments indicate that FFOCC is consistent with subjective evaluation. FFOCC significantly outperforms the other indices in evaluating fused images that are produced by different fusion methods and that are distorted in spatial and spectral features by blurring, adding noise, and changing intensity. All the findings indicate that the proposed method is an objective and effective quality assessment for remote sensing image fusion.
Validation of no-reference image quality index for the assessment of digital mammographic images
NASA Astrophysics Data System (ADS)
de Oliveira, Helder C. R.; Barufaldi, Bruno; Borges, Lucas R.; Gabarda, Salvador; Bakic, Predrag R.; Maidment, Andrew D. A.; Schiabel, Homero; Vieira, Marcelo A. C.
2016-03-01
To ensure optimal clinical performance of digital mammography, it is necessary to obtain images with high spatial resolution and low noise, keeping radiation exposure as low as possible. These requirements directly affect the interpretation of radiologists. The quality of a digital image should be assessed using objective measurements. In general, these methods measure the similarity between a degraded image and an ideal image without degradation (ground-truth), used as a reference. These methods are called Full-Reference Image Quality Assessment (FR-IQA). However, for digital mammography, an image without degradation is not available in clinical practice; thus, an objective method to assess the quality of mammograms must be performed without reference. The purpose of this study is to present a Normalized Anisotropic Quality Index (NAQI), based on the Rényi entropy in the pseudo-Wigner domain, to assess mammography images in terms of spatial resolution and noise without any reference. The method was validated using synthetic images acquired through an anthropomorphic breast software phantom, and the clinical exposures on anthropomorphic breast physical phantoms and patient's mammograms. The results reported by this noreference index follow the same behavior as other well-established full-reference metrics, e.g., the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Reductions of 50% on the radiation dose in phantom images were translated as a decrease of 4dB on the PSNR, 25% on the SSIM and 33% on the NAQI, evidencing that the proposed metric is sensitive to the noise resulted from dose reduction. The clinical results showed that images reduced to 53% and 30% of the standard radiation dose reported reductions of 15% and 25% on the NAQI, respectively. 
Thus, this index may be used in clinical practice as an image quality indicator to improve quality assurance programs in mammography; moreover, the proposed method reduces inter-observer subjectivity in the reporting of image quality assessment.
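The full-reference PSNR baseline against which the NAQI is compared reduces to a few lines of code. The sketch below is illustrative only (it is not the authors' implementation); the 8-bit peak value of 255 and the toy arrays are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a degraded image."""
    mse = np.mean((reference.astype(float) - degraded.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a degraded image offset by 10 gray levels everywhere (MSE = 100).
ref = np.full((8, 8), 100.0)
deg = ref + 10.0
print(round(psnr(ref, deg), 2))  # → 28.13
```

A 50% dose reduction lowering PSNR by 4 dB, as reported above, corresponds to the MSE roughly increasing by a factor of 10^0.4 ≈ 2.5.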
ERIC Educational Resources Information Center
Berg, Tanya
2017-01-01
This case study explores one teacher's integration of Alexander Technique and the work of neuromuscular retrainer Irene Dowd in ballet pedagogy to establish a somatic approach to teaching, learning, and performing ballet technique. This case study highlights the teacher's unique teaching method called IMAGE TECH for dancers (ITD) and offers…
Measuring food intake with digital photography.
Martin, C K; Nicklas, T; Gunturk, B; Correa, J B; Allen, H R; Champagne, C
2014-01-01
The digital photography of foods method accurately estimates the food intake of adults and children in cafeterias. When using this method, images of food selection and leftovers are quickly captured in the cafeteria. These images are later compared with images of 'standard' portions of food using computer software. The amount of food selected and discarded is estimated based upon this comparison, and the application automatically calculates energy and nutrient intake. In the present review, we describe this method, as well as a related method called the Remote Food Photography Method (RFPM), which relies on smartphones to estimate food intake in near real-time in free-living conditions. When using the RFPM, participants capture images of food selection and leftovers using a smartphone and these images are wirelessly transmitted in near real-time to a server for analysis. Because data are transferred and analysed in near real-time, the RFPM provides a platform for participants to quickly receive feedback about their food intake behaviour and to receive dietary recommendations for achieving weight loss and health promotion goals. The reliability and validity of measuring food intake with the RFPM in adults and children is also reviewed. In sum, the body of research reviewed demonstrates that digital imaging accurately estimates food intake in many environments and it has many advantages over other methods, including reduced participant burden, elimination of the need for participants to estimate portion size, and the incorporation of computer automation to improve the accuracy, efficiency and cost-effectiveness of the method. © 2013 The British Dietetic Association Ltd.
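Once the portion fractions have been estimated from the images, the intake computation described above is simple arithmetic. The sketch below is a hypothetical illustration; the portion size, fraction estimates, and energy density are invented numbers, not data from the review.

```python
def estimate_intake(standard_portion_g: float,
                    selected_fraction: float,
                    leftover_fraction: float,
                    kcal_per_g: float) -> tuple[float, float]:
    """Estimate grams eaten and energy intake from image-based fraction estimates.

    selected_fraction and leftover_fraction are the amounts judged from the
    selection and leftover images, expressed relative to the 'standard' portion.
    """
    grams_eaten = standard_portion_g * (selected_fraction - leftover_fraction)
    return grams_eaten, grams_eaten * kcal_per_g

# Hypothetical example: 200 g standard portion, 1.25 portions selected,
# 0.25 portions left over, energy density 1.5 kcal/g.
grams, kcal = estimate_intake(200.0, 1.25, 0.25, 1.5)
print(grams, kcal)  # → 200.0 300.0
```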
[The eye, the optic system and its anomalies].
Cohen, S Y
1993-09-15
The eye is a perceptive system with extremely complex physiology, although its optical properties can be approximated by those of spherical diopters. Various approximations make it possible to reduce the eyeball to a single convex diopter. In a normal eye, the image of an object situated at infinity focuses on the retina; such an eye is called emmetropic. Otherwise, the eye is called ametropic. Several types of ametropia exist. When the image focuses in front of the retina, the eye is said to be myopic. When the image focuses behind the retina, the eye is called hypermetropic (or hyperopic). When the image of an object differs according to the focusing axis, the eye is said to be astigmatic.
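The single-diopter reduction supports a quick worked example: in the thin-lens approximation, a myopic eye whose far point lies d metres away is corrected by a diverging lens of power P = -1/d dioptres (lens assumed at the eye, vertex distance ignored; the numbers are illustrative, not from the article).

```python
def myopia_correction_dioptres(far_point_m: float) -> float:
    """Power of the corrective lens that images an object at infinity onto the far point."""
    return -1.0 / far_point_m

# A far point of 0.5 m calls for a -2 dioptre lens.
print(myopia_correction_dioptres(0.5))  # → -2.0
```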
Matsuda, Atsushi; Schermelleh, Lothar; Hirano, Yasuhiro; Haraguchi, Tokuko; Hiraoka, Yasushi
2018-05-15
Correction of chromatic shift is necessary for precise registration of multicolor fluorescence images of biological specimens. New emerging technologies in fluorescence microscopy with increasing spatial resolution and penetration depth have prompted the need for more accurate methods to correct chromatic aberration. However, the amount of chromatic shift of the region of interest in biological samples often deviates from the theoretical prediction because of unknown dispersion in the biological samples. To measure and correct chromatic shift in biological samples, we developed a quadrisection phase correlation approach to computationally calculate translation, rotation, and magnification from reference images. Furthermore, to account for local chromatic shifts, images are split into smaller elements, for which the phase correlation between channels is measured individually and corrected accordingly. We implemented this method in an easy-to-use open-source software package, called Chromagnon, that is able to correct shifts with a 3D accuracy of approximately 15 nm. Applying this software, we quantified the level of uncertainty in chromatic shift correction, depending on the imaging modality used, and for different existing calibration methods, along with the proposed one. Finally, we provide guidelines to choose the optimal chromatic shift registration method for any given situation.
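The translation-estimation core of the phase-correlation approach described above can be sketched with NumPy alone. This is a simplified 2D version under circular-shift assumptions, not the Chromagnon implementation; the channel names and the simulated shift are made up.

```python
import numpy as np

def phase_correlation_shift(channel_a: np.ndarray, channel_b: np.ndarray):
    """Estimate the integer (row, col) shift of channel_b relative to channel_a."""
    cross_power = np.conj(np.fft.fft2(channel_a)) * np.fft.fft2(channel_b)
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
green = rng.random((64, 64))
red = np.roll(green, (3, -5), axis=(0, 1))   # simulated chromatic shift between channels
print(phase_correlation_shift(green, red))   # → (3, -5)
```

Splitting each image into smaller elements and running this estimator per element, as the paper does, yields a local shift map rather than a single global translation.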
Manifold learning of brain MRIs by deep learning.
Brosch, Tom; Tam, Roger
2013-01-01
Manifold learning of medical images plays a potentially important role in modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear, and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and has received much attention recently in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 x 128 x 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variation that correlate with demographic and disease parameters.
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium scale mapping of large areas abroad or with large volumes of images. In this paper, considering the geometric features of optical satellite images and building on RFM least-squares block adjustment and the Alternating Direction Method of Multipliers (ADMM), a widely used optimization method for constrained problems, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery, GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
NASA Astrophysics Data System (ADS)
Mehta, Shalin B.; Sheppard, Colin J. R.
2010-05-01
Various methods that use a large illumination aperture (i.e., partially coherent illumination) have been developed for making transparent (i.e., phase) specimens visible. These methods were developed to provide qualitative contrast rather than quantitative measurement; coherent illumination has instead been relied upon for quantitative phase analysis. Partially coherent illumination has some important advantages over coherent illumination and can be used for measurement of the specimen's phase distribution. However, quantitative analysis and image computation in partially coherent systems have not been explored fully due to the lack of a general, physically insightful and computationally efficient model of image formation. We have developed a phase-space model that satisfies these requirements. In this paper, we employ this model (called the phase-space imager) to elucidate the five partially coherent systems mentioned in the title. We compute images of an optical fiber under these systems and verify some of them with experimental images. These results and simulated images of a general phase profile are used to compare the contrast and the resolution of the imaging systems. We show that, for quantitative phase imaging of a thin specimen with matched illumination, differential phase contrast offers linear transfer of specimen information to the image. We also show that the edge enhancement properties of spiral phase contrast are compromised significantly as the coherence of illumination is reduced. The results demonstrate that the phase-space imager model provides a useful framework for analysis, calibration, and design of partially coherent imaging methods.
Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.
Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng
2015-11-01
Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse-coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing, and thus inevitably break the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely the dictionary pair learning (DPL) model, and we design a corresponding algorithm called DPL on the Grassmann manifold (DPLG). The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on benchmark images and the Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values of perceptual visual quality for denoised images, and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.
Thermographic image analysis as a pre-screening tool for the detection of canine bone cancer
NASA Astrophysics Data System (ADS)
Subedi, Samrat; Umbaugh, Scott E.; Fu, Jiyuan; Marino, Dominic J.; Loughin, Catherine A.; Sackman, Joseph
2014-09-01
Canine bone cancer is a common type of cancer that grows fast and may be fatal. It usually appears in the limbs, where it is called "appendicular bone cancer." Diagnostic imaging methods such as X-rays, computed tomography (CT), and magnetic resonance imaging (MRI) are more common in bone cancer detection than invasive physical examination such as biopsy. These imaging methods have some disadvantages, including high expense, high doses of radiation, and the need to keep the patient (canine) motionless during the imaging procedures. This study investigates the possibility of using thermographic images as a pre-screening tool for the diagnosis of bone cancer in dogs. Experiments were performed on thermographic images from 40 dogs exhibiting bone cancer, with color normalization based on temperature data provided by the Long Island Veterinary Specialists. The images were first divided into four groups according to body part (elbow/knee, full limb, shoulder/hip and wrist). Each group was then further divided into three sub-groups according to view (anterior, lateral and posterior). The thermographic patterns of normal and abnormal dogs were analyzed using feature extraction and pattern classification tools. Texture, spectral and histogram features were extracted from the thermograms and used for pattern classification. The best classification success rate in canine bone cancer detection is 90%, with a sensitivity of 100% and a specificity of 80%, produced by the anterior view of the full-limb region with the nearest neighbor classification method and the normRGB-lum color normalization method. Our results show that it is possible to use thermographic imaging as a pre-screening tool for the detection of canine bone cancer.
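The nearest-neighbor classification and the sensitivity/specificity figures reported above can be illustrated with a toy sketch. The two-dimensional features and labels below are invented stand-ins for the texture, spectral, and histogram features extracted from the thermograms; this is not the study's code.

```python
import numpy as np

def nearest_neighbor_predict(train_x, train_y, query):
    """Label each query sample with the class of its closest training sample (1-NN)."""
    d = np.linalg.norm(train_x[None, :, :] - query[:, None, :], axis=2)
    return train_y[np.argmin(d, axis=1)]

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return float(tp / (tp + fn)), float(tn / (tn + fp))

# Invented 2D feature vectors: 0 = normal thermogram, 1 = bone cancer.
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
test_x = np.array([[0.05, 0.1], [1.05, 0.9], [0.0, 0.3], [0.8, 1.0]])
test_y = np.array([0, 1, 0, 1])
pred = nearest_neighbor_predict(train_x, train_y, test_x)
print(sensitivity_specificity(test_y, pred))  # → (1.0, 1.0)
```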
Research on Wide-field Imaging Technologies for Low-frequency Radio Array
NASA Astrophysics Data System (ADS)
Lao, B. Q.; An, T.; Chen, X.; Wu, X. C.; Lu, Y.
2017-09-01
Wide-field imaging with low-frequency radio telescopes is subject to a number of difficult problems. One particularly pernicious problem is the non-coplanar baseline effect: ignoring the phase term in the w direction (the so-called w-term) distorts the final image, and the degradation is amplified for telescopes with a wide field of view. This paper summarizes and analyzes several w-term correction methods and their technical principles, comparing their computational cost and complexity to assess their advantages and disadvantages. We conduct simulations with two of these methods, faceting and w-projection, based on the configuration of the first-phase Square Kilometre Array (SKA) low-frequency array. The resulting images are also compared with the two-dimensional Fourier transform method. The results show that the image quality and correctness of both faceting and w-projection are better than those of the two-dimensional Fourier transform method in wide-field imaging. The effects of the number of facets and of w steps on image quality and run time have been evaluated; the results indicate that these numbers must be chosen reasonably. Finally, we analyze the effect of data size on the run time of faceting and w-projection. The results show that faceting and w-projection need to be optimized before massive amounts of data are processed. This work initiates an analysis of wide-field imaging techniques and their application in existing and future low-frequency arrays, and fosters their application in much broader fields.
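The w-term being corrected can be written explicitly: for direction cosines (l, m) and baseline coordinate w (in wavelengths), each visibility picks up a phase factor exp(-2πi·w·(√(1-l²-m²)-1)), which w-projection applies as a convolution kernel in the uv-plane. The sketch below merely evaluates that phase screen on a toy grid (the field-of-view extent and w values are illustrative, not an SKA pipeline).

```python
import numpy as np

def w_phase_screen(l: np.ndarray, m: np.ndarray, w: float) -> np.ndarray:
    """Phase factor exp(-2*pi*i * w * (sqrt(1 - l^2 - m^2) - 1)) over an (l, m) grid."""
    n_minus_1 = np.sqrt(1.0 - l**2 - m**2) - 1.0
    return np.exp(-2j * np.pi * w * n_minus_1)

# Toy field of view of +/-0.1 in direction cosines.
l, m = np.meshgrid(np.linspace(-0.1, 0.1, 128), np.linspace(-0.1, 0.1, 128))
print(np.allclose(w_phase_screen(l, m, 0.0), 1.0))  # → True (coplanar array: no w-term)
# For a large w the phase errors far from the phase centre become severe:
print(np.abs(np.angle(w_phase_screen(l, m, 1000.0))).max())
```

The screen is identically 1 only when w = 0 or at the phase centre (l = m = 0); the two-dimensional Fourier transform method implicitly assumes this, which is why it degrades away from the centre.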
NASA Astrophysics Data System (ADS)
Kadrmas, Dan J.; Frey, Eric C.; Karimi, Seemeen S.; Tsui, Benjamin M. W.
1998-04-01
Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with
tracer, and also using experimentally acquired data with
tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for
image reconstruction).
Zhang, Jinjin; Idiyatullin, Djaudat; Corum, Curtis A.; Kobayashi, Naoharu; Garwood, Michael
2017-01-01
Purpose: Methods designed to image fast-relaxing spins, such as sweep imaging with Fourier transformation (SWIFT), often utilize high excitation bandwidth and duty cycle, and in some applications the optimal flip angle cannot be used without exceeding safe specific absorption rate (SAR) levels. The aim is to reduce SAR and increase the flexibility of SWIFT by applying time-varying gradient modulation (GM). The modified sequence is called GM-SWIFT. Theory and Methods: The method known as gradient-modulated offset-independent adiabaticity was used to modulate the radiofrequency (RF) pulse and gradients. An expanded correlation algorithm was developed for GM-SWIFT to correct the phase and scale effects. Simulations and phantom and in vivo human experiments were performed to verify the correlation algorithm and to evaluate imaging performance. Results: GM-SWIFT reduces SAR, RF amplitude, and acquisition time by up to 90%, 70%, and 45%, respectively, while maintaining image quality. The choice of GM parameters influences the lower limit of short-T2* sensitivity, which can be exploited to suppress unwanted image haze from unresolvable ultrashort-T2* signals originating from plastic materials in the coil housing and fixatives. Conclusions: GM-SWIFT reduces peak and total RF power requirements and provides additional flexibility for optimizing SAR, RF amplitude, scan time, and image quality. PMID:25800547
Interactive searching of facial image databases
NASA Astrophysics Data System (ADS)
Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean
1995-09-01
A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness's verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target, but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate the process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist using statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
Screening mail for powders using terahertz technology
NASA Astrophysics Data System (ADS)
Kemp, Mike
2011-11-01
Following the 2001 Anthrax letter attacks in the USA, there has been a continuing interest in techniques that can detect or identify so-called 'white powder' concealed in envelopes. Electromagnetic waves (wavelengths 100-500 μm) in the terahertz frequency range penetrate paper and have short enough wavelengths to provide good resolution images; some materials also have spectroscopic signatures in the terahertz region. We report on an experimental study into the use of terahertz imaging and spectroscopy for mail screening. Spectroscopic signatures of target powders were measured and, using a specially designed test rig, a number of imaging methods based on reflection, transmission and scattering were investigated. It was found that, contrary to some previous reports, bacterial spores do not appear to have any strong spectroscopic signatures which would enable them to be identified. Imaging techniques based on reflection imaging and scattering are ineffective in this application, due to the similarities in optical properties between powders of interest and paper. However, transmission imaging using time-of-flight of terahertz pulses was found to be a very simple and sensitive method of detecting small quantities (25 mg) of powder, even in quite thick envelopes. An initial feasibility study indicates that this method could be used as the basis of a practical mail screening system.
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
Zhao, Zhizhen; Singer, Amit
2014-01-01
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
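The key property the pipeline above relies on — that the bispectrum of a 1D periodic signal is unchanged by circular translation (which, after the steerable-PCA mapping, corresponds to in-plane image rotation) — is easy to verify numerically. A minimal sketch, not the authors' pipeline:

```python
import numpy as np

def bispectrum(x: np.ndarray) -> np.ndarray:
    """B(k1, k2) = X(k1) X(k2) conj(X(k1 + k2)) for a 1D periodic signal."""
    X = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return X[k][:, None] * X[k][None, :] * np.conj(X[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(1)
signal = rng.random(32)
shifted = np.roll(signal, 7)   # circular translation; phases X(k) -> X(k) e^{-2*pi*i*k*t/n}
print(np.allclose(bispectrum(signal), bispectrum(shifted)))  # → True
```

The invariance follows because the translation phases of X(k1), X(k2), and conj(X(k1+k2)) cancel exactly, so no rotational alignment is needed before comparing images.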
NASA Astrophysics Data System (ADS)
Rand, Danielle; Derdak, Zoltan; Carlson, Rolf; Wands, Jack R.; Rose-Petruck, Christoph
2015-10-01
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors worldwide and is almost uniformly fatal. Current methods of detection include ultrasound examination and imaging by CT scan or MRI; however, these techniques are problematic in terms of sensitivity and specificity, and the detection of early tumors (<1 cm diameter) has proven elusive. Better, more specific, and more sensitive detection methods are therefore urgently needed. Here we discuss the application of a newly developed x-ray imaging technique called Spatial Frequency Heterodyne Imaging (SFHI) for the early detection of HCC. SFHI uses x-rays scattered by an object to form an image and is more sensitive than conventional absorption-based x-radiography. We show that tissues labeled in vivo with gold nanoparticle contrast agents can be detected using SFHI. We also demonstrate that directed targeting and SFHI of HCC tumors in a mouse model is possible through the use of HCC-specific antibodies. The enhanced sensitivity of SFHI relative to currently available techniques enables the x-ray imaging of tumors that are just a few millimeters in diameter and substantially reduces the amount of nanoparticle contrast agent required for intravenous injection relative to absorption-based x-ray imaging.
DECONV-TOOL: An IDL based deconvolution software package
NASA Technical Reports Server (NTRS)
Varosi, F.; Landsman, W. B.
1992-01-01
There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
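Of the algorithms listed, the Maximum Likelihood method (for Poisson noise, the Richardson-Lucy iteration) is compact enough to sketch. The NumPy version below, using circular convolutions and an invented toy test image, is illustrative only and is not DeConv_Tool's IDL implementation.

```python
import numpy as np

def richardson_lucy(blurred: np.ndarray, psf: np.ndarray, iters: int = 100) -> np.ndarray:
    """Maximum-likelihood (Richardson-Lucy) deconvolution with circular boundaries."""
    F = np.fft.fft2(psf)                              # PSF centred at pixel (0, 0)
    est = np.full_like(blurred, blurred.mean())       # flat initial estimate
    for _ in range(iters):
        conv = np.fft.ifft2(np.fft.fft2(est) * F).real
        ratio = blurred / (conv + 1e-12)
        est = est * np.fft.ifft2(np.fft.fft2(ratio) * np.conj(F)).real
    return est

# Toy test: blur two point sources with a Gaussian PSF, then deconvolve.
n = 64
y, x = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(n) * n, indexing="ij")
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[20, 20] = 1.0; truth[40, 44] = 1.0
blurred = np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)).real
restored = richardson_lucy(blurred, psf)
# The estimate should be closer to the truth than the blurred input is.
print(np.linalg.norm(restored - truth) < np.linalg.norm(blurred - truth))
```

The residual statistics DeConv_Tool monitors (log-likelihood, chi-square of the residual auto-correlation) would be computed from `blurred - conv` at each iteration.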
Dual-resolution image reconstruction for region-of-interest CT scan
NASA Astrophysics Data System (ADS)
Jin, S. O.; Shin, K. Y.; Yoo, S. K.; Kim, J. G.; Kim, K. H.; Huh, Y.; Lee, S. Y.; Kwon, O.-K.
2014-07-01
In an ordinary CT scan, the so-called full field-of-view (FFOV) scan, in which the x-ray beam covers the whole cross-section of the body, a large number of projections is necessary to reconstruct high-resolution images. However, excessive x-ray dose is a great concern in FFOV scans. A region-of-interest (ROI) scan is a method to visualize the ROI at high resolution while reducing the x-ray dose, but it suffers from bright-band artifacts that may hamper CT-number accuracy. In this study, we propose an image reconstruction method to eliminate the band artifacts in the ROI scan. In addition to the ROI scan with a high sampling rate in the view direction, we acquire FFOV projection data at a much lower sampling rate. Then, we reconstruct images in the compressed sensing (CS) framework with dual resolutions, that is, high resolution inside the ROI and low resolution outside it. For the dual-resolution image reconstruction, we implemented a dual-CS reconstruction algorithm in which the data fidelity and total variation (TV) terms are each enforced twice within the framework of adaptive steepest descent projection onto convex sets (ASD-POCS). The proposed method remarkably reduces the bright-band artifacts around the ROI boundary and also effectively suppresses streak artifacts over the entire image. We expect the proposed method to be useful for dual-resolution imaging with reduced radiation dose, artifacts and scan time.
Rotation invariants of vector fields from orthogonal moments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Bo; Kostková, Jitka; Flusser, Jan
Vector field images are a type of new multidimensional data that appear in many engineering areas. Although vector fields can be visualized as images, they differ from graylevel and color images in several respects, so special methods and algorithms must be developed from scratch, or substantially adapted from the traditional image processing area, in order to analyze them. Here, we propose a method for the description and matching of vector field patterns under an unknown rotation of the field. Rotation of a vector field is a so-called total rotation, where the action is applied not only to the spatial coordinates but also to the field values. Invariants of vector fields with respect to total rotation, constructed from orthogonal Gaussian-Hermite moments and Zernike moments, are introduced. Their numerical stability is shown to be better than that of the invariants published so far. We demonstrate their usefulness in a real-world template matching application with rotated vector fields.
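The "total rotation" the invariants are built for acts on both arguments: v'(x) = R v(R⁻¹x). For a 90° rotation on a square grid this can be realized exactly with array operations. A toy illustration (not the moment invariants themselves), checking that a simple rotation-invariant quantity is preserved:

```python
import numpy as np

def total_rotate_90(field: np.ndarray) -> np.ndarray:
    """Total 90-degree rotation of a 2D vector field stored as an (H, W, 2) array:
    rotate the sampling grid AND apply the same rotation to the vector values."""
    spatially_rotated = np.rot90(field, k=1, axes=(0, 1))   # rotate positions
    vx, vy = spatially_rotated[..., 0], spatially_rotated[..., 1]
    return np.stack([-vy, vx], axis=-1)                     # rotate values: (x, y) -> (-y, x)

rng = np.random.default_rng(2)
field = rng.normal(size=(16, 16, 2))
rotated = total_rotate_90(field)
# Vector magnitudes are only permuted across the grid, never changed.
mags_before = np.sort(np.linalg.norm(field, axis=-1), axis=None)
mags_after = np.sort(np.linalg.norm(rotated, axis=-1), axis=None)
print(np.allclose(mags_before, mags_after))  # → True
```

The moment invariants in the paper play the same role as the magnitude here, but carry far more shape information, which is what makes template matching under unknown rotation possible.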
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciani, A.; Kewish, C. M.; Guizar-Sicairos, M.
A newly developed data processing method able to characterize the osteocyte lacuno-canalicular network (LCN) is presented. Osteocytes are the most abundant cells in bone, living in spaces called lacunae embedded inside the bone matrix and connected to each other by an extensive network of canals that allows for the exchange of nutrients and for mechanotransduction functions. This three-dimensional (3D) geometrical architecture is increasingly thought to be related to the macroscopic strength or failure of bone, and it is becoming the focus of investigations into widespread diseases such as osteoporosis. Obtaining 3D LCN images non-destructively was out of reach until recently, since tens-of-nanometers-scale resolution is required. Ptychographic tomography was validated for bone imaging in [1], showing the LCN clearly. The method presented here was applied to 3D ptychographic tomographic images in order to extract morphological and geometrical parameters of the lacuno-canalicular structures.
Hom, Erik F. Y.; Marchis, Franck; Lee, Timothy K.; Haase, Sebastian; Agard, David A.; Sedat, John W.
2011-01-01
We describe an adaptive image deconvolution algorithm (AIDA) for myopic deconvolution of multi-frame and three-dimensional data acquired through astronomical and microscopic imaging. AIDA is a reimplementation and extension of the MISTRAL method developed by Mugnier and co-workers and shown to yield object reconstructions with excellent edge preservation and photometric precision [J. Opt. Soc. Am. A 21, 1841 (2004)]. Written in Numerical Python with calls to a robust constrained conjugate gradient method, AIDA has significantly improved run times over the original MISTRAL implementation. Included in AIDA is a scheme to automatically balance maximum-likelihood estimation and object regularization, which significantly decreases the amount of time and effort needed to generate satisfactory reconstructions. We validated AIDA using synthetic data spanning a broad range of signal-to-noise ratios and image types and demonstrated the algorithm to be effective for experimental data from adaptive optics–equipped telescope systems and wide-field microscopy. PMID:17491626
Laser scanning saturated structured illumination microscopy based on phase modulation
NASA Astrophysics Data System (ADS)
Huang, Yujia; Zhu, Dazhao; Jin, Luhong; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2017-08-01
Wide-field saturated structured illumination microscopy has not been widely used due to its requirement for high laser power. We propose a novel method called laser scanning saturated structured illumination microscopy (LS-SSIM), which introduces high-order harmonic frequencies and greatly reduces the laser power required for SSIM imaging. To accomplish this, an excitation PSF with two peaks is generated and scanned along different directions on the sample. Raw images are recorded cumulatively by a CCD detector and then reconstructed to form a high-resolution image with an extended optical transfer function (OTF). Our theoretical analysis and simulation results show that the LS-SSIM method reaches a resolution of 0.16 λ, a 2.7-fold improvement over conventional wide-field microscopy. In addition, LS-SSIM greatly improves the optical sectioning capability of a conventional wide-field illumination system by diminishing out-of-focus light. Furthermore, this modality has the advantage that it can be implemented in multi-photon microscopy with point-scanning excitation to image samples at greater depths.
Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu
2015-01-01
It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called Multiresolution Monogenic Signal Analysis (MMSA) is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflective wave to a large extent. Then we use a region-of-interest extraction method to separate genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
Collaborative learning using Internet2 and remote collections of stereo dissection images.
Dev, Parvati; Srivastava, Sakti; Senger, Steven
2006-04-01
We have investigated collaborative learning of anatomy over Internet2, using an application called remote stereo viewer (RSV). This application offers a unique method of teaching anatomy, using high-resolution stereoscopic images, in a client-server architecture. Rotated sequences of stereo image pairs were produced by volumetric rendering of the Visible Female and by dissecting and photographing a cadaveric hand. A client-server application (RSV) was created to provide access to these image sets, using a highly interactive interface. The RSV system was used to provide a "virtual anatomy" session for students in the Stanford Medical School Gross Anatomy course. The RSV application allows both independent and collaborative modes of viewing. The most appealing aspects of the RSV application were the capacity for stereoscopic viewing and the potential to access the content remotely within a flexible temporal framework. The RSV technology, used over Internet2, thus serves as an effective complement to traditional methods of teaching gross anatomy. (c) 2006 Wiley-Liss, Inc.
Ku, Taeyun; Swaney, Justin; Park, Jeong-Yoon; Albanese, Alexandre; Murray, Evan; Cho, Jae Hun; Park, Young-Gyun; Mangena, Vamsi; Chen, Jiapei; Chung, Kwanghun
2016-09-01
The biology of multicellular organisms is coordinated across multiple size scales, from the subnanoscale of molecules to the macroscale, tissue-wide interconnectivity of cell populations. Here we introduce a method for super-resolution imaging of the multiscale organization of intact tissues. The method, called magnified analysis of the proteome (MAP), linearly expands entire organs fourfold while preserving their overall architecture and three-dimensional proteome organization. MAP is based on the observation that preventing crosslinking within and between endogenous proteins during hydrogel-tissue hybridization allows for natural expansion upon protein denaturation and dissociation. The expanded tissue preserves its protein content, its fine subcellular details, and its organ-scale intercellular connectivity. We use off-the-shelf antibodies for multiple rounds of immunolabeling and imaging of a tissue's magnified proteome, and our experiments demonstrate a success rate of 82% (100/122 antibodies tested). We show that specimen size can be reversibly modulated to image both inter-regional connections and fine synaptic architectures in the mouse brain.
Rotation invariants of vector fields from orthogonal moments
Yang, Bo; Kostková, Jitka; Flusser, Jan; ...
2017-09-11
Vector field images are a new type of multidimensional data that appear in many engineering areas. Although vector fields can be visualized as images, they differ from graylevel and color images in several respects. To analyze them, special methods and algorithms must be developed from scratch or substantially adapted from the traditional image processing area. Here, we propose a method for the description and matching of vector field patterns under an unknown rotation of the field. Rotation of a vector field is a so-called total rotation, where the action is applied not only to the spatial coordinates but also to the field values. Invariants of vector fields with respect to total rotation, constructed from orthogonal Gaussian–Hermite moments and Zernike moments, are introduced. Their numerical stability is shown to be better than that of the invariants published so far. We demonstrate their usefulness in a real-world template matching application on rotated vector fields.
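A toy sketch of what "total rotation" means, using a 90-degree rotation so that the pixel grid maps onto itself exactly. The field values here are random, and the invariant checked is just the pointwise magnitude, far simpler than the Gaussian–Hermite and Zernike moment invariants the paper constructs:

```python
import numpy as np

def total_rotation_90(fx, fy):
    """'Total' rotation of a vector field by 90 degrees: the sampling
    grid is rotated AND the vector values are rotated by the same
    angle, i.e. f'(x) = R f(R^-1 x) with R the 90-degree rotation."""
    gx, gy = np.rot90(fx), np.rot90(fy)   # rotate the spatial grid (CCW)
    return -gy, gx                        # rotate the values: (x, y) -> (-y, x)

rng = np.random.default_rng(0)
fx = rng.standard_normal((32, 32))
fy = rng.standard_normal((32, 32))
rx, ry = total_rotation_90(fx, fy)

# The pointwise magnitude survives a total rotation, so any scalar
# built from it (e.g. moments of the magnitude image) is invariant.
mag, rmag = np.hypot(fx, fy), np.hypot(rx, ry)
```

Sign conventions for the grid versus value rotation matter when building real moment invariants, but not for this magnitude check.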
NASA Astrophysics Data System (ADS)
Ciani, A.; Guizar-Sicairos, M.; Diaz, A.; Holler, M.; Pallu, S.; Achiou, Z.; Jennane, R.; Toumi, H.; Lespessailles, E.; Kewish, C. M.
2016-01-01
A newly developed data processing method able to characterize the osteocyte lacuno-canalicular network (LCN) is presented. Osteocytes are the most abundant cells in bone, living in spaces called lacunae embedded inside the bone matrix and connected to each other by an extensive network of canals that allows for the exchange of nutrients and for mechanotransduction functions. The geometrical three-dimensional (3D) architecture is increasingly thought to be related to the macroscopic strength or failure of the bone, and it is becoming the focus for investigating widespread diseases such as osteoporosis. Obtaining 3D LCN images non-destructively was out of reach until recently, since tens-of-nanometers-scale resolution is required. Ptychographic tomography was validated for bone imaging in [1], showing the LCN clearly. The method presented here was applied to 3D ptychographic tomographic images in order to extract morphological and geometrical parameters of the lacuno-canalicular structures.
Super-Resolution Image Reconstruction Applied to Medical Ultrasound
NASA Astrophysics Data System (ADS)
Ellis, Michael
Ultrasound is the preferred imaging modality for many diagnostic applications due to its real-time image reconstruction and low cost. Nonetheless, conventional ultrasound is not used in many applications because of limited spatial resolution and soft tissue contrast. Most commercial ultrasound systems reconstruct images using a simple delay-and-sum architecture on receive, which is fast and robust but does not utilize all information available in the raw data. Recently, more sophisticated image reconstruction methods have been developed that make use of far more information in the raw data to improve resolution and contrast. One such method is the Time-Domain Optimized Near-Field Estimator (TONE), which employs maximum a posteriori estimation to solve a highly underdetermined problem, given a well-defined system model. TONE has been shown to significantly improve both the contrast and resolution of ultrasound images when compared to conventional methods. However, TONE's lack of robustness to variations from the system model and extremely high computational cost hinder it from being readily adopted in clinical scanners. This dissertation aims to reduce the impact of TONE's shortcomings, transforming it from an academic construct to a clinically viable image reconstruction algorithm. By altering the system model from a collection of individual hypothetical scatterers to a collection of weighted, diffuse regions, dTONE is able to achieve much greater robustness to modeling errors. A method for efficient parallelization of dTONE is presented that reduces reconstruction time by more than an order of magnitude with little loss in image fidelity. An alternative reconstruction algorithm, called qTONE, is also developed and is able to reduce reconstruction times by another two orders of magnitude while simultaneously improving image contrast.
Each of these methods for improving TONE is presented and its limitations explored, and all are used in concert to reconstruct in vivo images of a human testicle. In all instances, the methods presented here outperform conventional image reconstruction methods by a significant margin. As TONE and its variants are general image reconstruction techniques, the theories and research presented here have the potential to significantly improve not only ultrasound's clinical utility, but that of other imaging modalities as well.
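TONE itself is not sketched in the abstract, but the maximum a posteriori idea it rests on can be illustrated: for a linear system model y = Ax + n with Gaussian noise and a zero-mean Gaussian prior on x, the MAP estimate is the Tikhonov-regularized least-squares solution. The problem sizes and noise levels below are invented toy values, not TONE's actual model:

```python
import numpy as np

def map_reconstruct(A, y, sigma_n=0.1, sigma_x=1.0):
    """MAP estimate for y = A x + n with Gaussian noise (std sigma_n)
    and a zero-mean Gaussian prior on x (std sigma_x).  The estimate
    is the ridge/Tikhonov solution:
        x = (A^T A + lam I)^-1 A^T y,   lam = (sigma_n / sigma_x)**2
    which stays well-posed even when the system is underdetermined."""
    lam = (sigma_n / sigma_x) ** 2
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Underdetermined toy problem: 40 measurements, 60 unknowns.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 60))
x_true = rng.standard_normal(60)
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = map_reconstruct(A, y, sigma_n=0.01, sigma_x=1.0)
```

The prior is what makes the underdetermined inversion stable; the fit to the data remains tight even though x itself is not uniquely determined.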
Coating on Rock Beside a Young Martian Crater
2010-03-24
This image from the microscopic imager on NASA Mars Exploration Rover Opportunity shows details of the coating on a rock called Chocolate Hills, which the rover found and examined at the edge of a young crater called Concepción.
Gaucher cell, photomicrograph (image)
Gaucher disease is called a "lipid storage disease" because abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...
The compression and storage method of the same kind of medical images: DPCM
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
Medical imaging has started to take advantage of digital technology, opening the way for advanced medical imaging and teleradiology. Medical images, however, require large amounts of memory. At over 1 million bytes per image, a typical hospital needs a staggering amount of memory storage (over one trillion bytes per year), and transmitting an image over a network (even the promised superhighway) could take minutes--too slow for interactive teleradiology. This calls for image compression to significantly reduce the amount of data needed to represent an image. Several compression techniques with different compression ratios have been developed. However, the lossless techniques, which allow for perfect reconstruction of the original images, yield modest compression ratios, while the techniques that yield higher compression ratios are lossy, that is, the original image is reconstructed only approximately. Medical imaging poses the great challenge of having compression algorithms that are lossless (for diagnostic and legal reasons) and yet have a high compression ratio for reduced storage and transmission time. To meet this challenge, we are developing and studying compression schemes which are either strictly lossless or diagnostically lossless, taking advantage of the peculiarities of medical images and of medical practice. In order to increase the signal-to-noise ratio (SNR) by exploiting correlations within the source signal, a method based on differential pulse code modulation (DPCM) is presented.
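The DPCM idea can be sketched with a minimal one-dimensional previous-sample predictor (an illustration, not necessarily the authors' exact scheme). The point is that the scheme is strictly lossless, decoding reproduces the scanline exactly, while the residuals are small and therefore cheap to entropy-code:

```python
import numpy as np

def dpcm_encode(row):
    """Previous-sample predictor: transmit the first value and then
    the prediction residuals.  For smooth image rows the residuals
    are small, so an entropy coder can store them in few bits."""
    row = np.asarray(row, dtype=np.int32)
    return np.concatenate(([row[0]], np.diff(row)))

def dpcm_decode(code):
    """Invert the predictor: a running sum rebuilds the row exactly."""
    return np.cumsum(code)

scanline = np.array([100, 101, 103, 103, 102, 100, 99, 99])
code = dpcm_encode(scanline)        # [100, 1, 2, 0, -1, -2, -1, 0]
restored = dpcm_decode(code)
```

Raw values near 100 would need 7 bits each; the residuals here fit in 3 bits, and the reconstruction is bit-exact, which is the lossless property the abstract requires for diagnostic use.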
Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael
2018-06-01
To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially proposed version, the rotating slices are themselves described using blobs. We have replaced this description with a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans.
In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric thoracic scan. For the ACR phantom, image quality was comparable to clinical reconstructions as well as reconstructions using open-source FreeCT_wFBP software. The pediatric thoracic scan also yielded acceptable results. In addition, we did not observe any deleterious impact in image quality associated with the utilization of rotating slices. These evaluations also demonstrated reasonable tradeoffs in storage requirements and computational demands. FreeCT_ICD is an open-source implementation of a model-based iterative reconstruction method that extends the capabilities of previously released open source reconstruction software and provides the ability to perform vendor-independent reconstructions of clinically acquired raw projection data. This implementation represents a reasonable tradeoff between storage and computational requirements and has demonstrated acceptable image quality in both simulated and clinical image datasets. This article is protected by copyright. All rights reserved.
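The iterative coordinate descent idea can be sketched on a toy least-squares problem. Note how each coordinate update touches only one column of the matrix, which is the access pattern that motivates column-wise storage of the system matrix. This is an illustrative sketch, not FreeCT_ICD's actual solver or system model:

```python
import numpy as np

def coordinate_descent_ls(A, y, n_sweeps=200):
    """Minimise ||A x - y||^2 one coordinate at a time.  Keeping a
    running residual makes each update cost a single column access,
    so the matrix only ever needs to be read column-wise."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.astype(float).copy()          # residual  y - A x
    col_sq = (A ** 2).sum(axis=0)       # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(n):
            step = A[:, j] @ r / col_sq[j]   # exact 1-D minimiser
            x[j] += step
            r -= step * A[:, j]
    return x

# Well-posed toy problem with a known answer.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
x_hat = coordinate_descent_ls(A, A @ x_true)
```

For a noiseless, overdetermined system the sweeps converge to the true coefficients; in CT the same update loop runs over voxels with a regularized objective.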
A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.
Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F
2012-09-01
Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.
NASA Astrophysics Data System (ADS)
Hsu, Chih-Yu; Huang, Hsuan-Yu; Lee, Lin-Tsang
2010-12-01
The paper proposes a new four-stage procedure to preserve desired edges during noise reduction. At the first stage, a denoised image is obtained from the noisy image. At the second stage, an edge map is obtained with the Canny edge detector to find the edges of the object contours. At the third stage, the edge map can optionally be modified manually to capture all the desired edges of the object contours. At the final stage, a new method called the Edge Preserved Inhomogeneous Diffusion Equation (EPIDE) is used to smooth the noisy image, or the image denoised at the first stage, to achieve edge preservation. The Optical Character Recognition (OCR) results in the experiments show that the proposed procedure gives the best recognition results because of its edge-preservation capability.
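EPIDE itself is not specified in the abstract; as a hedged stand-in, the classic Perona-Malik inhomogeneous diffusion illustrates the shared mechanism of an edge-stopping conductance. All parameters and the test image below are illustrative assumptions:

```python
import numpy as np

def edge_preserving_diffusion(img, n_iter=20, kappa=20.0, dt=0.2):
    """Perona-Malik-style inhomogeneous diffusion: the conductance
    g = exp(-(d/kappa)**2) shuts diffusion down across strong edges
    (large neighbour differences d) while smoothing flat regions.
    Borders are treated as periodic (np.roll) for brevity."""
    g = lambda d: np.exp(-(d / kappa) ** 2)
    u = img.astype(float).copy()
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u      # differences to the
        ds = np.roll(u, -1, axis=0) - u     # four neighbours
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: the flat halves get smoothed, the 0 -> 100 jump stays.
rng = np.random.default_rng(3)
step = np.zeros((32, 32)); step[:, 16:] = 100.0
noisy = step + 5.0 * rng.standard_normal((32, 32))
smooth = edge_preserving_diffusion(noisy)
```

The conductance is near 1 for noise-scale differences and near 0 for the 100-level step, which is exactly the behavior an OCR pipeline wants: cleaner strokes without blurred character boundaries.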
The composite classification problem in optical information processing
NASA Technical Reports Server (NTRS)
Hall, Eric B.
1995-01-01
Optical pattern recognition allows objects to be recognized from their images and permits their positional parameters to be estimated accurately in real time. The guiding principle behind optical pattern recognition is that a lens focusing a beam of coherent light modulated with an image produces the two-dimensional Fourier transform of that image. When the resulting output is further transformed by the matched filter corresponding to the original image, one obtains the autocorrelation function of the original image, which has a peak at the origin. Such a device is called an optical correlator and may be used to recognize and locate the image for which it is designed. (From a practical perspective, an approximation to the matched filter must be used since the spatial light modulator (SLM) on which the filter is implemented usually does not allow one to independently control both the magnitude and phase of the filter.) Generally, one is not just concerned with recognizing a single image but is interested in recognizing a variety of rotated and scaled views of a particular image. In order to recognize these different views using an optical correlator, one may select a subset of these views (whose elements are called training images) and then use a composite filter that is designed to produce a correlation peak for each training image. Presumably, these peaks should be sharp and easily distinguishable from the surrounding correlation plane values. In this report we consider two areas of research regarding composite optical correlators. First, we consider the question of how best to choose the training images that are used to design the composite filter. With regard to quantity, the number of training images should be large enough to adequately represent all possible views of the targeted object yet small enough to ensure that the resolution of the filter is not exhausted.
As for the images themselves, they should be distinct enough to avoid numerical difficulties yet similar enough to avoid gaps in which certain views of the target would go unrecognized. One method that we introduce to study this problem, called probing, involves the creation of artificial imagery. The second problem we consider involves the classification of the composite filter's correlation plane data. In particular, we would like to determine not only whether we are viewing a training image but also, if so, which training image is being viewed. This second problem is investigated using traditional M-ary hypothesis testing techniques.
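The correlator behavior described above, a matched filter producing a sharp peak where the trained pattern sits, can be sketched digitally with FFTs. This is a single-template sketch with an invented scene, not a composite filter and not an optical implementation:

```python
import numpy as np

def correlate_matched(scene, template):
    """Digital analogue of the optical correlator: multiply the scene
    spectrum by the conjugate spectrum of the template (the matched
    filter) and invert.  The correlation plane peaks where the
    template appears in the scene."""
    S = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(template, s=scene.shape))
    return np.real(np.fft.ifft2(S * H))

# Embed a small pattern at a known offset and recover it from the peak.
rng = np.random.default_rng(4)
template = rng.standard_normal((8, 8))
scene = np.zeros((64, 64))
scene[20:28, 33:41] = template
plane = correlate_matched(scene, template)
peak = np.unravel_index(np.argmax(plane), plane.shape)
```

By the Cauchy-Schwarz inequality the zero-lag autocorrelation dominates every sidelobe, so the argmax of the correlation plane lands exactly on the embedding offset.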
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
Local Subspace Classifier with Transform-Invariance for Image Classification
NASA Astrophysics Data System (ADS)
Hotta, Seiji
A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, we can handle transform-invariance easily because tangent vectors can be used to approximate transformations. However, we cannot use tangent vectors for other types of images, such as color images. Hence, kernel LSC (KLSC) is proposed to incorporate transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified with experiments on handwritten digit and color image classification.
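A minimal sketch of the LSC decision rule on toy data, using the linear span of the k nearest same-class samples. This is a simplification: the original LSC formulation uses a local affine subspace, and the paper adds tangent-distance and kernel machinery on top, none of which is shown here:

```python
import numpy as np

def lsc_classify(x, X, y, k=3):
    """Local subspace classifier sketch: span a subspace with the k
    nearest training samples of each class and assign the label whose
    subspace lies closest to x (least-squares projection residual)."""
    best, label = np.inf, None
    for c in np.unique(y):
        Xc = X[y == c]
        near = Xc[np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]]
        # least-squares projection of x onto the span of the neighbours
        coef, *_ = np.linalg.lstsq(near.T, x, rcond=None)
        resid = np.linalg.norm(near.T @ coef - x)
        if resid < best:
            best, label = resid, c
    return label

# Two classes lying along different 1-D subspaces of R^3.
X = np.array([[1., 0., 0.], [2., 0., 0.], [3., 0., 0.],
              [0., 1., 0.], [0., 2., 0.], [0., 3., 0.]])
y = np.array([0, 0, 0, 1, 1, 1])
```

A query near either axis is assigned to the class whose local subspace it almost lies in, even when its nearest single neighbor is ambiguous; that projection step is also the source of the transformation sensitivity the paper addresses.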
Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery
NASA Astrophysics Data System (ADS)
Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn
2015-04-01
Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric imaging mode (CP), which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here the initial comparison of polarimetric information content between the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results showed that there are some different characteristics between the QP and CP modes. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen, Germany, and Algiers, Algeria, respectively.
3D quantitative phase imaging of neural networks using WDT
NASA Astrophysics Data System (ADS)
Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel
2015-03-01
White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved sub-micron resolution in all three directions with high sensitivity granted by the low coherence of a white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging any larger sample, including a cluster of cells, has not been demonstrated with the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is confocal fluorescence microscopy, which requires fluorescence tagging either with transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
NASA Astrophysics Data System (ADS)
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume: the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level.
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
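The blending step of AFS, picking the in-focus regions from a registered stack, can be sketched with a per-pixel sharpness criterion. The simple Laplacian-energy rule below is a stand-in assumption, not the dissertation's algorithm, and the two-frame test stack is synthetic:

```python
import numpy as np

def focus_stack(stack):
    """Blend a registered image stack by picking, per pixel, the frame
    with the strongest local Laplacian response (a crude sharpness
    measure).  Borders are periodic (np.roll) for brevity."""
    def laplacian(img):
        return np.abs(4 * img
                      - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                      - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    sharp = np.stack([laplacian(f) for f in stack])
    choice = np.argmax(sharp, axis=0)          # winning frame per pixel
    rows, cols = np.indices(choice.shape)
    return np.stack(stack)[choice, rows, cols]

# Two frames, each 'in focus' (high-frequency checker) on one half only.
checker = (np.indices((16, 32)).sum(axis=0) % 2).astype(float)
a = checker.copy(); a[:, 16:] = 0.5      # left half sharp, right blurred
b = checker.copy(); b[:, :16] = 0.5      # right half sharp, left blurred
fused = focus_stack([a, b])
```

The fused result recovers the detailed pattern across the full frame, which is the extended-DOF effect; AFS additionally makes the preceding registration step analytic.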
Pandora Cluster Seen by Spitzer
2016-09-28
This image of galaxy cluster Abell 2744, also called Pandora's Cluster, was taken by the Spitzer Space Telescope. The gravity of this galaxy cluster is strong enough that it acts as a lens to magnify images of more distant background galaxies. This technique is called gravitational lensing. The fuzzy blobs in this Spitzer image are the massive galaxies at the core of this cluster, but astronomers will be poring over the images in search of the faint streaks of light created where the cluster magnifies a distant background galaxy. The cluster is also being studied by NASA's Hubble Space Telescope and Chandra X-Ray Observatory in a collaboration called the Frontier Fields project. In this image, light from Spitzer's infrared channels is colored blue at 3.6 microns and green at 4.5 microns. http://photojournal.jpl.nasa.gov/catalog/PIA20920
Liu, Fang; Zhou, Zhaoye; Jang, Hyungseok; Samsonov, Alexey; Zhao, Gengyan; Kijowski, Richard
2018-04-01
To describe and evaluate a new fully automated musculoskeletal tissue segmentation method using a deep convolutional neural network (CNN) and three-dimensional (3D) simplex deformable modeling to improve the accuracy and efficiency of cartilage and bone segmentation within the knee joint. A fully automated segmentation pipeline was built by combining a semantic segmentation CNN and 3D simplex deformable modeling. A CNN technique called SegNet was applied as the core of the segmentation method to perform high resolution pixel-wise multi-class tissue classification. The 3D simplex deformable modeling refined the output from SegNet to preserve the overall shape and maintain a desirably smooth surface for musculoskeletal structures. The fully automated segmentation method was tested using a publicly available knee image data set to compare with currently used state-of-the-art segmentation methods. The fully automated method was also evaluated on two different data sets, which include morphological and quantitative MR images with different tissue contrasts. The proposed fully automated segmentation method provided good segmentation performance, with segmentation accuracy superior to most state-of-the-art methods in the publicly available knee image data set. The method also demonstrated versatile segmentation performance on both morphological and quantitative musculoskeletal MR images with different tissue contrasts and spatial resolutions. The study demonstrates that the combined CNN and 3D deformable modeling approach is useful for performing rapid and accurate cartilage and bone segmentation within the knee joint. The CNN has promising potential applications in musculoskeletal imaging. Magn Reson Med 79:2379-2391, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Gaucher cell, photomicrograph #2 (image)
Gaucher disease is called a "lipid storage disease" because abnormal amounts of lipids called "glycosphingolipids" are stored in special cells called reticuloendothelial cells. Classically, the nucleus is ...
NASA Astrophysics Data System (ADS)
Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru
2007-02-01
Recently, many types of computer-generated stereograms (CGSs), i.e., various works of art produced using computers, have been published for hobby and entertainment. Activation of the brain, improvement of eyesight, reduction of mental stress, healing effects, etc. are said to be expected when a CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereogram history, science, social organization, various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram has advantages and disadvantages when viewed directly with two eyes after training with a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), called the hidden image or picture, and the effect of irregular shifts of the texture pattern image, called wallpaper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and visual quality of the virtual image by means of simultaneous observation in the case of the parallel viewing method.
Gravity assisted recovery of liquid xenon at large mass flow rates
NASA Astrophysics Data System (ADS)
Virone, L.; Acounis, S.; Beaupère, N.; Beney, J.-L.; Bert, J.; Bouvier, S.; Briend, P.; Butterworth, J.; Carlier, T.; Chérel, M.; Crespi, P.; Cussonneau, J.-P.; Diglio, S.; Manzano, L. Gallego; Giovagnoli, D.; Gossiaux, P.-B.; Kraeber-Bodéré, F.; Ray, P. Le; Lefèvre, F.; Marty, P.; Masbou, J.; Morteau, E.; Picard, G.; Roy, D.; Staempflin, M.; Stutzmann, J.-S.; Visvikis, D.; Xing, Y.; Zhu, Y.; Thers, D.
2018-06-01
We report on a liquid xenon gravity assisted recovery method for nuclear medical imaging applications. The experimental setup consists of an elevated detector enclosed in a cryostat connected to a storage tank called ReStoX. Both elements are part of XEMIS2 (XEnon Medical Imaging System): an innovative medical imaging facility for pre-clinical research that uses pure liquid xenon as detection medium. Tests based on liquid xenon transfer from the detector to ReStoX have been successfully performed showing that an unprecedented mass flow rate close to 1 ton per hour can be reached. This promising achievement as well as future areas of improvement will be discussed in this paper.
MToS: A Tree of Shapes for Multivariate Images.
Carlinet, Edwin; Géraud, Thierry
2015-12-01
The topographic map of a gray-level image, also called the tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has proved very useful for many image processing and pattern recognition tasks. Its definition relies on a total ordering of pixel values, so this representation does not exist for color images or, more generally, multivariate images. Common workarounds, such as marginal processing or imposing a total order on the data, are not satisfactory and cause many problems. This paper presents a method to build a tree-based representation of multivariate images which features, marginally, the same properties as the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values, but rely only on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast-invariant and self-dual representation of multivariate images is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.
Estimating the signal-to-noise ratio of AVIRIS data
NASA Technical Reports Server (NTRS)
Curran, Paul J.; Dungan, Jennifer L.
1988-01-01
To make the best use of narrowband airborne visible/infrared imaging spectrometer (AVIRIS) data, an investigator needs to know the ratio of signal to random variability or noise (signal-to-noise ratio or SNR). The signal is land cover dependent and varies with both wavelength and atmospheric absorption; random noise comprises sensor noise and intrapixel variability (i.e., variability within a pixel). The three existing methods for estimating the SNR are inadequate, since typical laboratory methods inflate while dark current and image methods deflate the SNR. A new procedure is proposed called the geostatistical method. It is based on the removal of periodic noise by notch filtering in the frequency domain and the isolation of sensor noise and intrapixel variability using the semi-variogram. This procedure was applied easily and successfully to five sets of AVIRIS data from the 1987 flying season and could be applied to remotely sensed data from broadband sensors.
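The geostatistical idea above — extrapolating the semivariogram to zero lag so that the intercept (the "nugget") isolates the noise variance — can be sketched in a few lines. This is a simplified 1-D illustration under stated assumptions (a single transect, a small lag range, and a straight-line extrapolation), not the authors' implementation, which also includes notch filtering of periodic noise:

```python
import numpy as np

def semivariogram(signal, max_lag):
    """Empirical semivariogram of a 1-D transect:
    gamma(h) = half the mean squared difference between samples at lag h."""
    gammas = []
    for h in range(1, max_lag + 1):
        d = signal[h:] - signal[:-h]
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

def nugget_noise_std(signal, max_lag=10):
    """Fit a line to the first few lags of the semivariogram; the intercept
    at lag 0 (the nugget) estimates the noise variance, so its square root
    estimates the noise standard deviation."""
    lags = np.arange(1, max_lag + 1)
    gamma = semivariogram(signal, max_lag)
    slope, intercept = np.polyfit(lags, gamma, 1)
    return np.sqrt(max(intercept, 0.0))
```

The SNR would then be the mean signal level divided by this noise estimate.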
Analysis of 3-D Tongue Motion From Tagged and Cine Magnetic Resonance Images
Woo, Jonghye; Lee, Junghoon; Murano, Emi Z.; Stone, Maureen; Prince, Jerry L.
2016-01-01
Purpose Measuring tongue deformation and internal muscle motion during speech has been a challenging task because the tongue deforms in 3 dimensions, contains interdigitated muscles, and is largely hidden within the vocal tract. In this article, a new method is proposed to analyze tagged and cine magnetic resonance images of the tongue during speech in order to estimate 3-dimensional tissue displacement and deformation over time. Method The method involves computing 2-dimensional motion components using a standard tag-processing method called harmonic phase, constructing superresolution tongue volumes using cine magnetic resonance images, segmenting the tongue region using a random-walker algorithm, and estimating 3-dimensional tongue motion using an incompressible deformation estimation algorithm. Results Evaluation of the method is presented with a control group and a group of people who had received a glossectomy, all carrying out a speech task. A 2-step principal-components analysis is then used to reveal the unique motion patterns of the subjects. Azimuth motion angles and motion on the mirrored hemi-tongues are analyzed. Conclusion Tests of the method on a varied collection of subjects show its capability to capture patient-specific motion patterns and indicate its potential value in future speech studies. PMID:27295428
Active browsing using similarity pyramids
NASA Astrophysics Data System (ADS)
Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.
1998-12-01
In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
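The pyramid construction described above can be sketched by recursively clustering image feature vectors: coarse levels hold cluster summaries, leaves hold individual images. This is a minimal illustration assuming a simple k-means split with a fixed branching factor; the paper's actual construction (and its pruning and "worm hole" reorganization) is more elaborate:

```python
import numpy as np

def build_similarity_pyramid(features, branching=4, rng=None):
    """Recursively split an image set (rows of `features`) into `branching`
    clusters, producing a tree: browsing coarse levels shows whole clusters,
    descending ('zooming into') a branch reveals individual images."""
    rng = np.random.default_rng(rng)

    def kmeans(X, k, iters=20):
        centers = X[rng.choice(len(X), size=min(k, len(X)), replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(len(centers)):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    def split(idx):
        if len(idx) <= branching:
            return {"leaves": idx.tolist()}
        labels, centers = kmeans(features[idx], branching)
        if len(np.unique(labels)) == 1:      # degenerate split: stop here
            return {"leaves": idx.tolist()}
        return {"centers": centers,
                "children": [split(idx[labels == j]) for j in range(len(centers))
                             if np.any(labels == j)]}

    return split(np.arange(len(features)))
```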
High resolution surface plasmon microscopy for cell imaging
NASA Astrophysics Data System (ADS)
Argoul, F.; Monier, K.; Roland, T.; Elezgaray, J.; Berguiga, L.
2010-04-01
We introduce a new non-labeling high-resolution microscopy method for cellular imaging. This method, called SSPM (Scanning Surface Plasmon Microscopy), pushes the resolution limit of surface plasmon resonance imaging (SPRi) down to sub-micron scales. High-resolution SPRi is obtained by launching the surface plasmon with a high numerical aperture objective lens. The advantages of SSPM over other high-resolution SPRi techniques rely on three aspects: (i) the interferometric detection of the back-reflected light after plasmon excitation, (ii) the two-dimensional scanning of the sample for image reconstruction, and (iii) the radial polarization of light, enhancing both resolution and sensitivity. This microscope can afford a lateral resolution of ~150 nm in a liquid environment and ~200 nm in air. We present in this paper images of IMR90 fibroblasts obtained with SSPM in a dried environment. Internal compartments such as the nucleus, nucleolus, mitochondria, and cellular and nuclear membranes can be recognized without labeling. We propose an interpretation of the ability of SSPM to reveal high index contrast zones by a local decomposition of the V(Z) function describing the response of the SSPM.
Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang
2018-06-12
The beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to offer a smoother square neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF results in an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvements, on average, in FWHM and CR compared with GCF while maintaining the speckle quality. This demonstrates that the proposed methods can effectively improve image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
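The core of the SMSF idea — weighting each DAS output sample by how coherent the delayed channel data are at that sample — can be sketched as follows. This is a minimal reading of the definition in the abstract (the exact scaling, apodization, and envelope handling in the paper are not reproduced, and `eps` is an assumed regularizer):

```python
import numpy as np

def smsf_das(channel_data, eps=1e-10):
    """DAS beamforming weighted by a signal mean-to-standard-deviation factor.

    channel_data: (n_elements, n_pixels) array of delayed (time-aligned)
    samples. For each pixel, SMSF = |mean across elements| / (std across
    elements + eps): large when the channels agree (coherent echo), small
    for incoherent off-axis clutter, so multiplying the DAS output by it
    suppresses clutter."""
    das = channel_data.mean(axis=0)
    smsf = np.abs(das) / (channel_data.std(axis=0) + eps)
    return smsf * das
```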
Kawano, Yoshihiro; Higgins, Christopher; Yamamoto, Yasuhito; Nyhus, Julie; Bernard, Amy; Dong, Hong-Wei; Karten, Harvey J.; Schilling, Tobias
2013-01-01
We present a new method for whole slide darkfield imaging. Whole Slide Imaging (WSI), also sometimes called virtual slide or virtual microscopy technology, produces images that simultaneously provide high resolution and a wide field of observation that can encompass the entire section, extending far beyond any single field of view. For example, a brain slice can be imaged so that both overall morphology and individual neuronal detail can be seen. We extended the capabilities of traditional whole slide systems and developed a prototype system for darkfield internal reflection illumination (DIRI). Our darkfield system uses an ultra-thin light-emitting diode (LED) light source to illuminate slide specimens from the edge of the slide. We used a new type of side illumination, a variation on the internal reflection method, to illuminate the specimen and create a darkfield image. This system has four main advantages over traditional darkfield: (1) no oil condenser is required for high-resolution imaging; (2) there is less scatter from dust and dirt on the slide specimen; (3) there is less halo, providing a more natural darkfield contrast image; and (4) the motorized system produces darkfield, brightfield, and fluorescence images. The WSI method sometimes allows us to image using fewer stains. For instance, diaminobenzidine (DAB) and fluorescent staining are helpful tools for observing protein localization and volume in tissues. However, these methods usually require counter-staining in order to visualize tissue structure, limiting the accuracy of localization of labeled cells within the complex multiple regions of typical neurohistological preparations. Darkfield imaging works on the basis of light scattering from refractive index mismatches in the sample. It is a label-free method of producing contrast in a sample.
We propose that adapting darkfield imaging to WSI is very useful, particularly when researchers require additional structural information without the use of further staining. PMID:23520500
Diverse Region-Based CNN for Hyperspectral Image Classification.
Zhang, Mengmeng; Li, Wei; Du, Qian
2018-06-01
Convolutional neural networks (CNNs) are of great interest in machine learning and have demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representations to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits the spatial-spectral context sensitivity that is essential for accurate pixel classification. The proposed method, which exploits diverse region-based inputs to learn contextual interaction features, is expected to have more discriminative power. The joint representation, containing rich spectral and spatial information, is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method can surpass conventional deep-learning-based classifiers and other state-of-the-art classifiers.
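One plausible reading of "diverse region-based inputs" is extracting several neighborhoods of different scales around each pixel of the hyperspectral cube. The sketch below illustrates only that input-preparation step (the region sizes and the mean-pooling used to summarize each region are illustrative assumptions; the paper feeds the regions to a CNN rather than averaging them):

```python
import numpy as np

def diverse_regions(cube, row, col, sizes=(1, 3, 5)):
    """For one pixel of a hyperspectral cube (H, W, B), extract square
    neighborhoods at several scales, summarize each as a mean spectrum,
    and concatenate them into one context-aware feature vector."""
    h, w, b = cube.shape
    feats = []
    for s in sizes:
        r = s // 2
        patch = cube[max(0, row - r): row + r + 1,
                     max(0, col - r): col + r + 1]
        feats.append(patch.reshape(-1, b).mean(axis=0))
    return np.concatenate(feats)
```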
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
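The Bayesian estimation idea described above can be sketched as a database average weighted by the likelihood of each candidate's simulated brightness temperatures given the observation. This is a minimal illustration with an assumed diagonal Gaussian error model, not the GPROF implementation itself:

```python
import numpy as np

def bayesian_retrieval(tb_obs, tb_db, profiles_db, noise_std):
    """Bayesian estimate of a geophysical profile from passive microwave
    brightness temperatures (Tb).

    tb_obs:      (n_channels,) observed Tb vector.
    tb_db:       (n_candidates, n_channels) simulated Tb for each database entry.
    profiles_db: (n_candidates, n_levels) corresponding geophysical profiles.
    Returns the posterior-mean profile under a diagonal Gaussian error model."""
    resid = (tb_db - tb_obs) / noise_std
    logw = -0.5 * np.sum(resid ** 2, axis=1)
    w = np.exp(logw - logw.max())     # subtract max for numerical stability
    w /= w.sum()
    return w @ profiles_db
```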
Detection of Tephra Layers in Antarctic Sediment Cores with Hyperspectral Imaging
Aymerich, Ismael F.; Oliva, Marc; Giralt, Santiago; Martín-Herrero, Julio
2016-01-01
Tephrochronology uses recognizable volcanic ash layers (from airborne pyroclastic deposits, or tephras) in geological strata to set unique time references for paleoenvironmental events across wide geographic areas. This involves the detection of tephra layers which sometimes are not evident to the naked eye, including the so-called cryptotephras. Tests that are expensive, time-consuming, and/or destructive are often required. Destructive testing for tephra layers of cores from difficult regions, such as Antarctica, which are useful sources of other kinds of information beyond tephras, is always undesirable. Here we propose hyperspectral imaging of cores, Self-Organizing Map (SOM) clustering of the preprocessed spectral signatures, and spatial analysis of the classified images as a convenient, fast, non-destructive method for tephra detection. We test the method in five sediment cores from three Antarctic lakes, and show its potential for detection of tephras and cryptotephras. PMID:26815202
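The SOM clustering step can be sketched with a minimal 1-D self-organizing map over spectral signatures: after training, pixels of an anomalous layer (such as a tephra) should concentrate in their own best-matching unit(s). The map size, learning schedule, and neighborhood function below are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

def train_som(data, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal 1-D Self-Organizing Map on (n_samples, n_bands) data."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    units = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                    # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5        # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))
            h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

def assign(data, w):
    """Best-matching unit for each sample."""
    return np.argmin(((data[:, None] - w[None]) ** 2).sum(-1), axis=1)
```

The spatial-analysis stage of the paper would then look for thin, contiguous bands of pixels sharing an unusual unit.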
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
Summary This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density point (isopoint) first. After building a coarse initial mesh approximating the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not produce any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. According to experiments, the method proved to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
NASA Astrophysics Data System (ADS)
Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele
2018-05-01
A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm, and the method is numerically stable in the presence of noise.
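The baseline Paganin algorithm that the correction term modifies can be sketched as a single Fourier-domain low-pass filter followed by a logarithm, recovering projected thickness of a single-material object. This is the standard single-image formula, not the coherence-corrected version of the paper (whose extra source-blur factor would enter the same filter):

```python
import numpy as np

def paganin_thickness(intensity, pixel_size, dist, delta, mu, flat=1.0):
    """Single-image phase retrieval for a single-material object:
    T(x, y) = -(1/mu) * ln( F^-1 [ F[I / I_in] / (1 + z * delta * k^2 / mu) ] )
    with propagation distance z (`dist`), refractive index decrement `delta`,
    and linear attenuation coefficient `mu` (SI units assumed throughout)."""
    i_norm = intensity / flat
    ky = np.fft.fftfreq(i_norm.shape[0], d=pixel_size) * 2 * np.pi
    kx = np.fft.fftfreq(i_norm.shape[1], d=pixel_size) * 2 * np.pi
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = 1.0 / (1.0 + dist * delta * k2 / mu)
    smoothed = np.real(np.fft.ifft2(np.fft.fft2(i_norm) * filt))
    return -np.log(np.clip(smoothed, 1e-12, None)) / mu
```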
Detection of shifted double JPEG compression by an adaptive DCT coefficient model
NASA Astrophysics Data System (ADS)
Wang, Shi-Lin; Liew, Alan Wee-Chung; Li, Sheng-Hong; Zhang, Yu-Jin; Li, Jian-Hua
2014-12-01
In many JPEG image splicing forgeries, the tampered image patch has been JPEG-compressed twice with different block alignments. Such phenomenon in JPEG image forgeries is called the shifted double JPEG (SDJPEG) compression effect. Detection of SDJPEG-compressed patches could help in detecting and locating the tampered region. However, the current SDJPEG detection methods do not provide satisfactory results especially when the tampered region is small. In this paper, we propose a new SDJPEG detection method based on an adaptive discrete cosine transform (DCT) coefficient model. DCT coefficient distributions for SDJPEG and non-SDJPEG patches have been analyzed and a discriminative feature has been proposed to perform the two-class classification. An adaptive approach is employed to select the most discriminative DCT modes for SDJPEG detection. The experimental results show that the proposed approach can achieve much better results compared with some existing approaches in SDJPEG patch detection especially when the patch size is small.
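The kind of DCT-coefficient statistics such detectors rely on can be illustrated with a blockwise 8x8 DCT and a per-mode histogram; double compression leaves periodic peaks and gaps in these histograms that a classifier can exploit. This is background machinery only (the JPEG transform and a generic histogram feature), not the paper's adaptive model or its mode-selection scheme:

```python
import numpy as np

def dct2_blocks(img):
    """Orthonormal 8x8 block DCT-II (the JPEG transform) via the DCT matrix."""
    n = 8
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    h, w = img.shape
    blocks = img.reshape(h // n, n, w // n, n).transpose(0, 2, 1, 3)
    return C @ blocks @ C.T          # shape (h//8, w//8, 8, 8)

def mode_histogram_feature(img, mode=(0, 1)):
    """Histogram of one DCT mode across all blocks of a level-shifted image."""
    coeffs = dct2_blocks(img.astype(float) - 128.0)[..., mode[0], mode[1]]
    hist, _ = np.histogram(coeffs.ravel(),
                           bins=np.arange(-50.5, 51.5), density=True)
    return hist
```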
Rolling Shutter Effect aberration compensation in Digital Holographic Microscopy
NASA Astrophysics Data System (ADS)
Monaldi, Andrea C.; Romero, Gladis G.; Cabrera, Carlos M.; Blanc, Adriana V.; Alanís, Elvio E.
2016-05-01
Due to the sequential-readout nature of most CMOS sensors, each row of the sensor array is exposed at a different time, resulting in the so-called rolling shutter effect, which induces geometric distortion in the image if the video camera or the object moves during image acquisition. In digital hologram recording in particular, while the sensor progressively captures each row of the hologram, interferometric fringes can oscillate due to external vibrations and/or noise even when the object under study remains motionless. The sensor records each hologram row at different instants of these disturbances. As a final effect, phase information is corrupted, degrading the quality of the reconstructed holograms. We present a fast and simple method for compensating this effect based on image processing tools. The method is exemplified on holograms of static microscopic biological objects. The results encourage adopting CMOS sensors over CCDs in Digital Holographic Microscopy, owing to their better resolution and lower cost.
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multivariate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware.
Damon, Stephen M; Boyd, Brian D; Plassard, Andrew J; Taylor, Warren; Landman, Bennett A
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
DAX - the next generation: towards one million processes on commodity hardware
NASA Astrophysics Data System (ADS)
Damon, Stephen M.; Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-03-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner.
DAX - The Next Generation: Towards One Million Processes on Commodity Hardware
Boyd, Brian D.; Plassard, Andrew J.; Taylor, Warren; Landman, Bennett A.
2017-01-01
Large scale image processing demands a standardized way of not only storage but also a method for job distribution and scheduling. The eXtensible Neuroimaging Archive Toolkit (XNAT) is one of several platforms that seeks to solve the storage issues. Distributed Automation for XNAT (DAX) is a job control and distribution manager. Recent massive data projects have revealed several bottlenecks for projects with >100,000 assessors (i.e., data processing pipelines in XNAT). In order to address these concerns, we have developed a new API, which exposes a direct connection to the database rather than REST API calls to accomplish the generation of assessors. This method, consistent with XNAT, keeps a full history for auditing purposes. Additionally, we have optimized DAX to keep track of processing status on disk (called DISKQ) rather than on XNAT, which greatly reduces load on XNAT by vastly dropping the number of API calls. Finally, we have integrated DAX into a Docker container with the idea of using it as a Docker controller to launch Docker containers of image processing pipelines. Using our new API, we reduced the time to create 1,000 assessors (a sub-cohort of our case project) from 65040 seconds to 229 seconds (a decrease of over 270 fold). DISKQ, using pyXnat, allows launching of 400 jobs in under 10 seconds which previously took 2,000 seconds. Together these updates position DAX to support projects with hundreds of thousands of scans and to run them in a time-efficient manner. PMID:28919661
NASA Astrophysics Data System (ADS)
Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier
2015-03-01
With the advances of PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank is proposed based on a 2-D eigenfilter approach. The proposed approach is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks; fan-shaped, diamond-shaped, and checkerboard-shaped filters are designed. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity, and frequency selectivity in comparison to existing design methods. The proposed method is validated on synthetic and real ultrasound data, which confirms improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises
Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise. PMID:28692667
Optimal Weights Mixed Filter for removing mixture of Gaussian and impulse noises.
Jin, Qiyu; Grama, Ion; Liu, Quansheng
2017-01-01
In this paper we consider the problem of restoring an image contaminated by a mixture of Gaussian and impulse noise. We propose a new statistic called ROADGI, which improves the well-known Rank-Ordered Absolute Differences (ROAD) statistic for detecting points contaminated with impulse noise in this context. Combining the ROADGI statistic with the method of weights optimization, we obtain a new algorithm called the Optimal Weights Mixed Filter (OWMF) to deal with the mixed noise. Our simulation results show that the proposed filter is effective for mixed noises, as well as for single impulse noise and for single Gaussian noise.
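The baseline ROAD statistic that ROADGI improves on can be sketched directly from its definition: for each pixel, sum the m smallest absolute differences to its 8 neighbors, so impulse-corrupted pixels (which disagree with most neighbors) get large values. This illustrates the standard ROAD only, not the paper's ROADGI variant:

```python
import numpy as np

def road(image, m=4):
    """Rank-Ordered Absolute Differences of each pixel: the sum of the m
    smallest absolute differences to its 8-neighborhood (reflect padding)."""
    padded = np.pad(image.astype(float), 1, mode="reflect")
    diffs = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy: padded.shape[0] - 1 + dy,
                             1 + dx: padded.shape[1] - 1 + dx]
            diffs.append(np.abs(shifted - image))
    diffs = np.sort(np.stack(diffs), axis=0)   # per-pixel ranked differences
    return diffs[:m].sum(axis=0)
```

Thresholding this map flags candidate impulse pixels, which a mixed filter can then treat differently from Gaussian-noise pixels.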
Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga
2017-11-01
Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques, called Positioning and Sweeping auto-registration, have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
High resolution imaging of a subsonic projectile using automated mirrors with large aperture
NASA Astrophysics Data System (ADS)
Tateno, Y.; Ishii, M.; Oku, H.
2017-02-01
Visual tracking of high-speed projectiles is required for studying the aerodynamics around such objects. One solution to this problem is a tracking method based on the so-called 1 ms Auto Pan-Tilt (1ms-APT) system that we proposed in previous work, which consists of rotational mirrors and a high-speed image processing system. However, the images obtained with that system did not have high enough resolution to permit detailed measurement of the projectiles because of the size of the mirrors. In this study, we propose a new system consisting of enlarged mirrors for tracking high-speed projectiles so as to achieve higher-resolution imaging, and we confirmed the effectiveness of the system via an experiment in which a projectile flying at subsonic speed was tracked.
Roi Detection and Vessel Segmentation in Retinal Image
NASA Astrophysics Data System (ADS)
Sabaz, F.; Atila, U.
2017-11-01
Diabetes damages the structure of the eye and can eventually lead to loss of vision. Depending on the stage of the disease, called diabetic retinopathy, sudden vision loss and blurred vision can occur. Automated detection of vessels in retinal images is useful for diagnosing eye diseases, classifying disease, and other clinical applications. The shape and structure of the vessels give information about the severity and stage of the disease. Automatic and fast detection of vessels allows a quick diagnosis, so that treatment can start promptly. ROI detection and vessel extraction methods for retinal images are presented in this study. It is shown that the Frangi filter used in image processing can be successfully applied to the detection and extraction of vessels.
Implementation of Steiner point of fuzzy set.
Liang, Jiuzhen; Wang, Dejiang
2014-01-01
This paper deals with the implementation of the Steiner point of a fuzzy set. Some definitions and properties of the Steiner point are investigated and extended to fuzzy sets. The paper focuses on establishing efficient methods to compute the Steiner point of a fuzzy set, and two strategies are proposed. One is a linear combination of the Steiner points computed from a series of crisp α-cut sets of the fuzzy set. The other is an approximate method, which tries to find the optimal α-cut set that best approximates the fuzzy set. Stability analysis of the Steiner point of a fuzzy set is also studied. Some experiments on image processing are given, in which the two methods are applied to compute the Steiner point of a fuzzy image, and both strategies show their own advantages in computing the Steiner point of a fuzzy set.
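A minimal sketch of the first strategy, assuming a discrete membership grid, with the centroid of each crisp α-cut used as a simple stand-in for that cut's Steiner point (the true Steiner point of a convex body requires a support-function integral, which is omitted here):

```python
import numpy as np

def alpha_cut_centroid(mu, alpha):
    """Centroid of the crisp alpha-cut {x : mu(x) >= alpha}; used here as a
    simple stand-in for the crisp set's Steiner point."""
    ys, xs = np.nonzero(mu >= alpha)
    if xs.size == 0:
        return None
    return np.array([xs.mean(), ys.mean()])

def fuzzy_steiner_point(mu, alphas=np.linspace(0.1, 1.0, 10)):
    """First strategy: linear combination of the per-alpha-cut points,
    weighted by the membership level alpha."""
    pts, ws = [], []
    for a in alphas:
        p = alpha_cut_centroid(mu, a)
        if p is not None:
            pts.append(p)
            ws.append(a)
    return np.average(pts, axis=0, weights=ws)

# Toy fuzzy image: membership decays linearly away from (8, 8)
yy, xx = np.mgrid[0:17, 0:17]
mu = np.clip(1.0 - np.hypot(xx - 8, yy - 8) / 6.0, 0.0, 1.0)
print(fuzzy_steiner_point(mu))  # [8. 8.], the fuzzy set's center
```

For this radially symmetric membership function, every α-cut is centered at (8, 8), so the weighted combination lands there as well.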
NASA Astrophysics Data System (ADS)
Pelamatti, Alice; Goiffon, Vincent; Chabane, Aziouz; Magnan, Pierre; Virmontois, Cédric; Saint-Pé, Olivier; de Boisanger, Michel Breart
2016-11-01
The charge transfer time represents the bottleneck in terms of temporal resolution in Pinned Photodiode (PPD) CMOS image sensors. This work focuses on the modeling and estimation of this key parameter. A simple numerical model of charge transfer in PPDs is presented. The model is based on a Monte Carlo simulation and takes into account both charge diffusion in the PPD and the effect of potential obstacles along the charge transfer path. This work also presents a new experimental approach for the estimation of the charge transfer time, called the pulsed Storage Gate (SG) method. This method, which allows reproduction of a "worst-case" transfer condition, is based on dedicated SG pixel structures and is particularly suitable for comparing transfer efficiency performance across different pixel geometries.
NASA Astrophysics Data System (ADS)
Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.
2007-03-01
Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer-measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
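The shell-driven thresholding idea can be sketched as follows. The iterative intermeans rule below is a simple stand-in for the paper's histogram-derived optimal threshold, and the image and shell geometry are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic CT-like patch: a bright "tumor" disk (0.8) on darker background (0.2)
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)
image = np.where(r < 15, 0.8, 0.2) + rng.normal(0, 0.05, (64, 64))

def isodata_threshold(values, tol=1e-4):
    """Iterative intermeans threshold estimated from a 1-D sample."""
    t = values.mean()
    while True:
        lo, hi = values[values <= t], values[values > t]
        if lo.size == 0 or hi.size == 0:
            return t
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# "Propagating shell": a thick band straddling the current front (radius 15),
# so object and background are roughly balanced inside it
shell = (r > 11) & (r < 19)
t = isodata_threshold(image[shell])
print(f"shell threshold ~ {t:.2f}")
```

Because the shell contains object and background in near-equal proportion, the estimated threshold sits close to the midpoint of the two intensities, which is the condition under which the abstract says the shell stops propagating.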
2002-05-21
The so-called Face on Mars can be seen slightly above center and to the right in this NASA Mars Odyssey image. This 3-km-long knob was first imaged by NASA's Viking spacecraft in the 1970s and, to some, resembled a face carved into the rocks of Mars.
Imaging strategies using focusing functions with applications to a North Sea field
NASA Astrophysics Data System (ADS)
da Costa Filho, C. A.; Meles, G. A.; Curtis, A.; Ravasi, M.; Kritski, A.
2018-04-01
Seismic methods are used in a wide variety of contexts to investigate subsurface Earth structures, and to explore and monitor resources and waste-storage reservoirs in the upper ˜100 km of the Earth's subsurface. Reverse-time migration (RTM) is one widely used seismic method which constructs high-frequency images of subsurface structures. Unfortunately, RTM has certain disadvantages shared with other conventional single-scattering-based methods, such as not being able to correctly migrate multiply scattered arrivals. In principle, the recently developed Marchenko methods can be used to migrate all orders of multiples correctly. In practice, however, Marchenko methods are costlier to compute than RTM: for a single imaging location, the cost of performing the Marchenko method is several times that of standard RTM, and performing RTM itself requires dedicated use of some of the largest computers in the world for individual data sets. A different imaging strategy is therefore required. We propose a new set of imaging methods which use so-called focusing functions to obtain images with few artifacts from multiply scattered waves, while greatly reducing the number of points across the image at which the Marchenko method need be applied. Focusing functions are outputs of the Marchenko scheme: they are solutions of wave equations that focus in time and space at particular surface or subsurface locations. However, they are mathematical rather than physical entities, being defined only in reference media that are equal to the true Earth above their focusing depths but homogeneous below. Here, we use these focusing functions as virtual source/receiver surface seismic surveys, the upgoing focusing function being the virtual received wavefield that is created when the downgoing focusing function acts as a spatially distributed source. These source/receiver wavefields are used in three imaging schemes: one allows specific individual reflectors to be selected and imaged.
The other two schemes provide either targeted or complete images with distinct advantages over current RTM methods, such as fewer artifacts and artifacts that occur in different locations. The latter property allows the recently published `combined imaging' method to remove almost all artifacts. We show several examples to demonstrate the methods: acoustic 1-D and 2-D synthetic examples, and a 2-D line from an ocean bottom cable field data set. We discuss an extension to elastic media, which is illustrated by a 1.5-D elastic synthetic example.
Circular Data Images for Directional Data
NASA Technical Reports Server (NTRS)
Morpet, William J.
2004-01-01
Directional data includes vectors, points on a unit sphere, axis orientation, angular direction, and circular or periodic data. The theoretical statistics for circular data (random points on a unit circle) or spherical data (random points on a unit sphere) are a recent development. An overview of existing graphical methods for the display of directional data is given. Cross-over occurs when periodic data are measured on a scale for the measurement of linear variables. For example, if angle is represented by a linear color gradient changing uniformly from dark blue at -180 degrees to bright red at +180 degrees, the color image will be discontinuous at +180 degrees and -180 degrees, which are the same location. The resultant color would depend on the direction of approach to the cross-over point. A new graphical method for imaging directional data is described, which affords high resolution without color discontinuity from "cross-over". It is called the circular data image. The circular data image uses a circular color scale in which colors repeat periodically. Some examples of the circular data image include direction of earth winds on a global scale, rocket motor internal flow, earth global magnetic field direction, and rocket motor nozzle vector direction vs. time.
A Machine Learning Ensemble Classifier for Early Prediction of Diabetic Retinopathy.
S K, Somasundaram; P, Alli
2017-11-09
The main complication of diabetes is diabetic retinopathy (DR), a retinal vascular disease that can lead to blindness. Regular screening for early DR detection is a labor- and resource-intensive task, so automatic, computational detection of DR is an attractive solution. An automatic method can reliably determine the presence of an abnormality in fundus images (FI), but the classification step is often performed poorly. Recently, a few research works have been designed for analyzing texture discrimination capacity in FI to distinguish healthy images. However, the feature extraction (FE) process was not performed well, owing to the high dimensionality. Therefore, to identify retinal features for DR disease diagnosis and early detection, a machine learning and ensemble classification method called the Machine Learning Bagging Ensemble Classifier (ML-BEC) is designed. The ML-BEC method comprises two stages. The first stage extracts the candidate objects from Retinal Images (RI). The candidate objects, or features, for DR disease diagnosis include blood vessels, optic nerve, neural tissue, neuroretinal rim, optic disc size, thickness, and variance. These features are initially extracted by applying a machine learning technique called t-distributed Stochastic Neighbor Embedding (t-SNE). t-SNE generates a probability distribution across high-dimensional images in which the images are separated into similar and dissimilar pairs, and then describes a similar probability distribution across the points in a low-dimensional map. This minimizes the Kullback-Leibler divergence between the two distributions with respect to the locations of the points on the map. The second stage applies ensemble classifiers to the extracted features to provide accurate analysis of digital FI using machine learning.
In this stage, an automatic DR screening system using a Bagging Ensemble Classifier (BEC) is investigated. Through the voting process in ML-BEC, bagging minimizes the error due to the variance of the base classifier. With the publicly available retinal image databases, our classifier is trained with 25% of the RI. Results show that the ensemble classifier can achieve better classification accuracy (CA) than single classification models. Empirical experiments suggest that the machine-learning-based ensemble classifier is efficient for further reducing DR classification time (CT).
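The bagging stage can be sketched with scikit-learn's BaggingClassifier (assuming scikit-learn is available). The synthetic features below merely stand in for the t-SNE embeddings of retinal descriptors described above; the 25% training fraction mirrors the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the t-SNE feature embeddings of retinal images;
# the real ML-BEC features (vessels, optic disc, rim, ...) are not modeled here
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# The abstract trains on 25% of the retinal images; mirror that split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.25, random_state=0)

# Bagging votes over bootstrap-trained base classifiers (decision trees by
# default), reducing the variance of any single base classifier
clf = BaggingClassifier(n_estimators=25, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy = {clf.score(X_te, y_te):.2f}")
```

The variance-reduction argument in the abstract is exactly what the bootstrap-plus-voting construction delivers: individually noisy trees average out to a more stable ensemble decision.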
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections, and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical modeling approach, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps will be successively described, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of this imaging process consists first in creating a 3D grid by meshing the volume potentially inspected by the sensor. As a result, a given number of elemental volumes (called voxels) are obtained. Secondly, DPSM modeling is used to compute an image for every occurrence in which exactly one voxel has a conductivity different from all the others. The assumption is that a real defect can be faithfully represented by a superposition of elemental voxels; the resulting accuracy will naturally depend on the density of the spatial sampling. In addition, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current at variable frequency. The simulation is therefore performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged into a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme in order to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to the experimental images provided by an eddy current imager (ECI), implemented on aluminum plates containing some buried defects.
In order to validate the proposed inversion process, we feed it with computed images of various acquisition configurations. Noise was added to the images so that they are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may only take two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and the pixels showing non-realistic values.
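A toy version of the inversion step: a random matrix stands in for the DPSM observation matrix, followed by a least-squares solve and the binary threshold described above (all names and sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observation matrix": column j holds the modeled probe
# response to voxel j alone being defective (a stand-in for DPSM output)
n_meas, n_vox = 40, 12
A = rng.normal(size=(n_meas, n_vox))
true_defect = np.zeros(n_vox)
true_defect[[3, 4]] = 1.0                                  # two defective voxels
data = A @ true_defect + 0.01 * rng.normal(size=n_meas)    # noisy EC measurements

# Linear inversion, then the binary threshold used for notch-type defects
# (relative conductivity is either 1 or 0)
x, *_ = np.linalg.lstsq(A, data, rcond=None)
recovered = np.flatnonzero(x > 0.5)
print(recovered)  # [3 4]
```

As in the abstract, thresholding the inverted image with the a priori knowledge that defects are binary suppresses residual noise and non-physical voxel values.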
A transversal approach for patch-based label fusion via matrix completion
Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Thung, Kim-Han; Guo, Yanrong; Shen, Dinggang
2015-01-01
Recently, multi-atlas patch-based label fusion has received an increasing interest in the medical image segmentation field. After warping the anatomical labels from the atlas images to the target image by registration, label fusion is the key step to determine the latent label for each target image point. Two popular types of patch-based label fusion approaches are (1) reconstruction-based approaches that compute the target labels as a weighted average of atlas labels, where the weights are derived by reconstructing the target image patch using the atlas image patches; and (2) classification-based approaches that determine the target label as a mapping of the target image patch, where the mapping function is often learned using the atlas image patches and their corresponding labels. Both approaches have their advantages and limitations. In this paper, we propose a novel patch-based label fusion method to combine the above two types of approaches via matrix completion (and hence, we call it transversal). As we will show, our method overcomes the individual limitations of both reconstruction-based and classification-based approaches. Since the labeling confidences may vary across the target image points, we further propose a sequential labeling framework that first labels the highly confident points and then gradually labels more challenging points in an iterative manner, guided by the label information determined in the previous iterations. We demonstrate the performance of our novel label fusion method in segmenting the hippocampus in the ADNI dataset, subcortical and limbic structures in the LONI dataset, and mid-brain structures in the SATA dataset. We achieve more accurate segmentation results than both reconstruction-based and classification-based approaches. Our label fusion method is also ranked 1st in the online SATA Multi-Atlas Segmentation Challenge. PMID:26160394
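A minimal sketch of patch-based fusion in the reconstruction-based spirit, using simple similarity weights rather than the paper's matrix-completion formulation (patches, labels, and the sigma parameter below are toy data):

```python
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, sigma=0.5):
    """Similarity-weighted label fusion sketch: each atlas patch votes for
    its label with a weight that decays with its distance to the target
    patch (a simple stand-in for reconstruction-derived weights)."""
    d = np.array([np.linalg.norm(target_patch - p) for p in atlas_patches])
    w = np.exp(-(d / sigma) ** 2)
    w /= w.sum()
    return float(w @ np.asarray(atlas_labels, dtype=float))  # soft label in [0, 1]

target = np.array([0.9, 0.8, 0.9, 1.0])
atlases = [np.array([0.9, 0.8, 0.9, 0.9]),   # similar patch, label 1
           np.array([0.8, 0.9, 1.0, 0.9]),   # similar patch, label 1
           np.array([0.1, 0.2, 0.1, 0.0])]   # dissimilar patch, label 0
print(fuse_label(target, atlases, [1, 1, 0]))  # close to 1
```

The returned soft label can also serve as a labeling confidence, which is the quantity the sequential framework above uses to decide which points to label first.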
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of origins of detected events in a three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a simulated 10 000 000-event acquisition ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmosphere effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which can avoid the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to a successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is used to distinguish clouds from other objects. Similarly, the algorithm called Fmask (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. Therefore, the feature extraction algorithm is based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images. Consequently, the co-occurrence matrix and temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and the others. In experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and images containing landscapes of agriculture, snow areas, and islands are tested.
Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
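The threshold-free classification idea can be sketched with an SVM on synthetic per-pixel features that stand in for ACCA/Fmask-style band statistics (assuming scikit-learn is available):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic per-pixel features standing in for ACCA/Fmask-style band
# statistics (e.g. reflectance ratios, brightness temperature); toy data
n = 300
cloud = rng.normal([0.8, 0.7, 0.2], 0.1, size=(n, 3))
clear = rng.normal([0.3, 0.2, 0.6], 0.1, size=(n, 3))
X = np.vstack([cloud, clear])
y = np.array([1] * n + [0] * n)

# A learned statistical boundary replaces hand-tuned, data-sensitive thresholds
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.85, 0.75, 0.15]]))  # [1]: classified as cloud
```

The design point here is the one the abstract makes: the decision boundary is fitted to the data rather than fixed by a hand-picked threshold, so it transfers across scenes without per-case tuning.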
Simultaneous Multi-band Detection of Low Surface Brightness Galaxies with Markovian Modeling
NASA Astrophysics Data System (ADS)
Vollmer, B.; Perret, B.; Petremand, M.; Lavigne, F.; Collet, Ch.; van Driel, W.; Bonnarel, F.; Louys, M.; Sabatini, S.; MacArthur, L. A.
2013-02-01
We present to the astronomical community an algorithm for the detection of low surface brightness (LSB) galaxies in images, called MARSIAA (MARkovian Software for Image Analysis in Astronomy), which is based on multi-scale Markovian modeling. MARSIAA can be applied simultaneously to different bands. It segments an image into a user-defined number of classes, according to their surface brightness and surroundings—typically, one or two classes contain the LSB structures. We have developed an algorithm, called DetectLSB, which allows the efficient identification of LSB galaxies from among the candidate sources selected by MARSIAA. The application of the method to two and three bands simultaneously was tested on simulated images. Based on our tests, we are confident that we can detect LSB galaxies down to a central surface brightness level of only 1.5 times the standard deviation from the mean pixel value in the image background. To assess the robustness of our method, the method was applied to a set of 18 B- and I-band images (covering 1.3 deg² in total) of the Virgo Cluster to which Sabatini et al. previously applied a matched-filter dwarf LSB galaxy search algorithm. We have detected all 20 objects from the Sabatini et al. catalog which we could classify by eye as bona fide LSB galaxies. Our method has also detected four additional Virgo Cluster LSB galaxy candidates undetected by Sabatini et al. To further assess the completeness of the results of our method, MARSIAA, SExtractor, and DetectLSB were applied to search for (1) mock Virgo LSB galaxies inserted into a set of deep Next Generation Virgo Survey (NGVS) gri-band subimages and (2) Virgo LSB galaxies identified by eye in a full set of NGVS square degree gri images. MARSIAA/DetectLSB recovered ~20% more mock LSB galaxies and ~40% more LSB galaxies identified by eye than SExtractor/DetectLSB.
With a 90% fraction of false positives from an entirely unsupervised pipeline, a completeness of 90% is reached for sources with r_e > 3'' at a mean surface brightness level of μ_g = 27.7 mag arcsec^-2 and a central surface brightness of μ_0,g = 26.7 mag arcsec^-2. About 10% of the false positives are artifacts, the rest being background galaxies. We have found our proposed Markovian LSB galaxy detection method to be complementary to the application of matched filters and an optimized use of SExtractor, and to have the following advantages: it is scale free, can be applied simultaneously to several bands, and is well adapted for crowded regions on the sky.
Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.
Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades
2015-01-01
DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. 
This work presents an automated genotyping tool for DNA gel electrophoresis images, called GELect, which was written in Java and made available through the ImageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix and intelligently extract distorted and even doublet bands that are difficult to identify with existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically, allowing users to efficiently conduct large-scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.
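The lane segmentation step can be illustrated on a toy gel: lanes appear as bright column bands, so thresholding the column-mean intensity profile locates them (real gels need the distortion handling described above; this sketch assumes undistorted vertical lanes):

```python
import numpy as np

# Toy gel image: three bright 5-px-wide lanes on a dark background
gel = np.full((40, 30), 0.2)
for lane_x in (4, 14, 24):
    gel[:, lane_x:lane_x + 5] = 0.8

# Lane segmentation: threshold the column-mean intensity profile,
# then read off the rising edges as lane start columns
profile = gel.mean(axis=0)
in_lane = profile > 0.5
starts = np.flatnonzero(np.diff(in_lane.astype(int)) == 1) + 1
print(starts)  # [ 4 14 24]: left edge of each lane
```

Band extraction within each lane works the same way on the row-mean profile of the lane's columns, which is why lane distortion (bent columns) is the hard part in practice.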
Forest biomass change estimated from height change in interferometric SAR height models.
Solberg, Svein; Næsset, Erik; Gobakken, Terje; Bollandsås, Ole-Martin
2014-12-01
There is a need for new satellite remote sensing methods for monitoring tropical forest carbon stocks. Advanced RADAR instruments on board satellites can contribute novel methods. RADARs can see through clouds, and furthermore, by applying stereo RADAR imaging we can measure forest height and its changes. Such height changes are related to carbon stock changes in the biomass. We here apply data from the current TanDEM-X satellite mission, where two RADAR-equipped satellites fly in close formation providing stereo imaging. We combine this with similar data acquired by one of the space shuttles in the year 2000, i.e. the so-called SRTM mission. We derive height information from a RADAR image pair using a method called interferometry. We demonstrate an approach for REDD based on interferometry data from a boreal forest in Norway. We fitted a model to the data in which above-ground biomass in the forest increases by 15 t/ha for every metre of increase in the height of the RADAR echo. When the RADAR echo is at the ground the estimated biomass is zero, and when it is 20 m above the ground the estimated above-ground biomass is 300 t/ha. Using this model we obtained fairly accurate estimates of biomass changes from 2000 to 2011. For 200 m² plots we obtained an accuracy of 65 t/ha, which corresponds to 50% of the mean above-ground biomass value. We also demonstrate that this method can be applied without accurate terrain heights and without former in-situ biomass data, both of which are generally lacking in tropical countries. The gain in accuracy was marginal when we included such data in the estimation. Finally, we demonstrate that logging and other biomass changes can be accurately mapped. A biomass change map based on interferometry corresponded well to a very accurate map derived from repeated scanning with airborne laser. Satellite-based stereo imaging with advanced RADAR instruments appears to be a promising method for REDD.
Interferometric processing of the RADAR data provides maps of forest height changes from which we can estimate temporal changes in biomass and carbon.
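The fitted height-to-biomass relation reported above (zero at ground level, 300 t/ha at 20 m, i.e. 15 t/ha per metre) can be written directly; the change estimate is then just the difference between the two epochs' height models:

```python
def above_ground_biomass(echo_height_m, slope=15.0):
    """Fitted model from the abstract: AGB (t/ha) grows ~15 t/ha per metre
    of RADAR echo height above ground; zero when the echo is at the ground."""
    return slope * echo_height_m

def biomass_change(height_2000_m, height_2011_m):
    """Change estimate from two interferometric height models
    (e.g. SRTM in 2000 vs. TanDEM-X in 2011)."""
    return above_ground_biomass(height_2011_m) - above_ground_biomass(height_2000_m)

print(above_ground_biomass(20.0))  # 300.0 t/ha, as stated in the abstract
print(biomass_change(18.0, 12.0))  # -90.0 t/ha, e.g. after partial logging
```

Because the change is a difference of two heights times a constant slope, an absolute terrain height is not needed, which is the property the abstract highlights for tropical applications.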
An Image Processing Approach to Linguistic Translation
NASA Astrophysics Data System (ADS)
Kubatur, Shruthi; Sreehari, Suhas; Hegde, Rajeshwari
2011-12-01
The art of translation is as old as written literature. Developments since the Industrial Revolution have influenced the practice of translation, nurturing schools, professional associations, and standards. In this paper, we propose a method for translating typed Kannada text (taken as an image) into its equivalent English text. The National Instruments (NI) Vision Assistant (version 8.5) has been used for Optical Character Recognition (OCR). We developed a new way of transliteration (which we call NIV transliteration) to simplify the training of characters. We also built a special type of dictionary for the purpose of translation.
PACS and teleradiology for on-call support of abdominal imaging
NASA Astrophysics Data System (ADS)
Horii, Steven C.; Garra, Brian S.; Mun, Seong K.; Zeman, Robert K.; Levine, Betty A.; Fielding, Robert
1991-07-01
One aspect of the Georgetown image management and communications system (IMACS or PACS) is a built-in capability to support teleradiology. Unlike many dedicated teleradiology systems, the support of this capability as a part of PACS means that any acquired images are remotely accessible, not just those specifically input for transmission. Over the past one and one-half years, two radiologists (SCH, BSG) in the abdominal imaging division of the department of radiology have been accumulating experience with teleradiology for on-call support of emergency abdominal imaging, chiefly in ultrasound. As of the time of this writing, use of the system during on-call (one of these attending radiologists primarily responsible) or back-up call (the attending responsible for the Fellow on primary call) has resulted in a marked reduction in the number of times one of them has to drive to the hospital at night or over the weekend. Approximately 80% of the time, use of the teleradiology system obviates having to go in to review a case. The remainder of the time, the radiologist has to perform a procedure (e.g., abscess drainage) or a scan (e.g., complex Doppler study) himself. This paper reviews the system used for teleradiology, how it is electronically and operationally integrated with the PACS, the clinical benefits and disadvantages of this use, and radiologist and referring physician acceptance.
Ultrafast image-based dynamic light scattering for nanoparticle sizing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Wu; Zhang, Jie; Liu, Lili
An ultrafast sizing method for nanoparticles is proposed, called UIDLS (Ultrafast Image-based Dynamic Light Scattering). This method makes use of the intensity fluctuation of light scattered from nanoparticles in Brownian motion, similar to the conventional DLS method. The difference in the experimental system is that the light scattered by the nanoparticles is received by an image sensor instead of a photomultiplier tube. A novel data processing algorithm is proposed to directly obtain the correlation coefficient between two images at a certain time interval (from microseconds to milliseconds) by employing a two-dimensional image correlation algorithm. This coefficient has been proved to be a monotonic function of the particle diameter. Samples of standard latex particles (79/100/352/482/948 nm) were measured for validation of the proposed method. Measurement accuracy higher than 90% was found, with standard deviations less than 3%. A sample of nanosilver particles with nominal size of 20 ± 2 nm and a sample of polymethyl methacrylate emulsion with unknown size were also tested using the UIDLS method. The measured results were 23.2 ± 3.0 nm and 246.1 ± 6.3 nm, respectively, which is substantially consistent with the transmission electron microscope results. Since the time for acquisition of two successive images has been reduced to less than 1 ms and the data processing time to about 10 ms, the total measuring time can be dramatically reduced from hundreds of seconds to tens of milliseconds, which provides the potential for real-time and in situ nanoparticle sizing.
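The core of the data processing step, a 2-D correlation coefficient between two frames, can be sketched with NumPy. The synthetic frames below mimic the physics only qualitatively: smaller, faster-diffusing particles decorrelate the speckle more between frames:

```python
import numpy as np

def image_correlation(img1, img2):
    """2-D correlation coefficient between two speckle frames; UIDLS relates
    its decay over a short inter-frame lag to the particle diameter."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
frame = rng.normal(size=(64, 64))
# Larger particles diffuse slowly, so the speckle changes little between
# frames; smaller particles decorrelate faster (purely synthetic frames)
slow = frame + 0.1 * rng.normal(size=(64, 64))
fast = frame + 1.0 * rng.normal(size=(64, 64))
print(image_correlation(frame, slow) > image_correlation(frame, fast))  # True
```

Since only two frames and one scalar are needed per measurement, the total processing cost is what allows the millisecond-scale sizing described above.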
Morphological rational multi-scale algorithm for color contrast enhancement
NASA Astrophysics Data System (ADS)
Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.
2010-01-01
The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image for segmentation. In mathematical morphology, several works have been derived from the contrast-enhancement framework proposed by Meyer and Serra. However, for images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, these morphological methods do not achieve the desired enhancement. In this work, a rational multi-scale method is proposed that uses a class of morphological connected filters called filters by reconstruction. Granulometry is used to find the most significant scales for the filters, avoiding the use of scales that contribute little. The CIE u'v'Y' space is used to present our results, since it takes Weber's law into account and, by avoiding the creation of new colors, permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method; it is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.
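A filter by reconstruction, the building block named in the abstract, can be sketched with standard grayscale morphology; this is a generic illustration, not the paper's rational multi-scale pipeline, and the 3×3 footprint and iteration cap are arbitrary choices.

```python
import numpy as np
from scipy import ndimage

def reconstruction_by_dilation(marker, mask, max_iter=1000):
    """Grayscale morphological reconstruction: repeatedly dilate the marker
    and clip it under the mask until the result stops changing."""
    prev = marker
    footprint = np.ones((3, 3))
    for _ in range(max_iter):
        cur = np.minimum(ndimage.grey_dilation(prev, footprint=footprint), mask)
        if np.array_equal(cur, prev):
            return cur
        prev = cur
    return prev

def opening_by_reconstruction(image, size):
    """Filter by reconstruction: erosion removes bright structures smaller
    than `size`; reconstruction restores surviving structures with their
    original contours, which is why these filters preserve edges."""
    eroded = ndimage.grey_erosion(image, size=(size, size))
    return reconstruction_by_dilation(eroded, image)
```

The contour-preserving behaviour (small bright objects vanish, large ones are recovered exactly) is what distinguishes these connected filters from a plain opening.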
Contour-Based Corner Detection and Classification by Using Mean Projection Transform
Kahaki, Seyed Mostafa Mousavi; Nordin, Md Jan; Ashtari, Amir Hossein
2014-01-01
Image corner detection is a fundamental task in computer vision. Many applications require reliable detectors to accurately detect corner points, commonly achieved by using image contour information. The curvature definition is sensitive to local variation and edge aliasing, and available smoothing methods are not sufficient to address these problems properly. Hence, we propose Mean Projection Transform (MPT) as a corner classifier and parabolic fit approximation to form a robust detector. The first step is to extract corner candidates using MPT based on the integral properties of the local contours in both the horizontal and vertical directions. Then, an approximation of the parabolic fit is calculated to localize the candidate corner points. The proposed method presents fewer false-positive (FP) and false-negative (FN) points compared with recent standard corner detection techniques, especially in comparison with curvature scale space (CSS) methods. Moreover, a new evaluation metric, called accuracy of repeatability (AR), is introduced. AR combines repeatability and the localization error (Le) for finding the probability of correct detection in the target image. The output results exhibit better repeatability, localization, and AR for the detected points compared with the criteria in original and transformed images. PMID:24590354
Shermeyer, Jacob S.; Haack, Barry N.
2015-01-01
Two forestry-change detection methods are described, compared, and contrasted for estimating deforestation and growth in threatened forests in southern Peru from 2000 to 2010. The methods used in this study rely on freely available data, including atmospherically corrected Landsat 5 Thematic Mapper and Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation continuous fields (VCF). The two methods include a conventional supervised signature extraction method and a unique self-calibrating method called MODIS VCF guided forest/nonforest (FNF) masking. The process chain for each of these methods includes a threshold classification of MODIS VCF, training data or signature extraction, signature evaluation, k-nearest neighbor classification, analyst-guided reclassification, and postclassification image differencing to generate forest change maps. Comparisons of all methods were based on an accuracy assessment using 500 validation pixels. Results of this accuracy assessment indicate that FNF masking had a 5% higher overall accuracy and was superior to conventional supervised classification when estimating forest change. Both methods succeeded in classifying persistently forested and nonforested areas, and both had limitations when classifying forest change.
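The processing chain above combines a VCF threshold mask with k-nearest-neighbour classification; a hedged toy sketch of those two steps is below. The 30% tree-cover cut-off, the two-band (red, NIR) features, and the tiny training set are illustrative, not values from the study.

```python
import numpy as np

def vcf_fnf_mask(vcf_tree_cover, threshold=30.0):
    """Threshold MODIS VCF percent tree cover into a forest (1) /
    nonforest (0) mask; the 30% cut-off here is an illustrative choice."""
    return (vcf_tree_cover >= threshold).astype(int)

def knn_classify(pixels, train_feats, train_labels, k=3):
    """Brute-force k-NN classification of spectral vectors by majority
    vote among the k nearest training signatures."""
    d2 = ((pixels[:, None, :] - train_feats[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d2, axis=1)[:, :k]   # indices of k closest signatures
    votes = train_labels[nearest]
    return (votes.mean(axis=1) >= 0.5).astype(int)
```

In the self-calibrating FNF variant described above, the training labels would come from the VCF-derived mask itself rather than from analyst-drawn signatures.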
Speckle imaging with the MAMA detector: Preliminary results
NASA Technical Reports Server (NTRS)
Horch, E.; Heanue, J. F.; Morgan, J. S.; Timothy, J. G.
1994-01-01
We report on the first successful speckle imaging studies using the Stanford University speckle interferometry system, an instrument that uses a multianode microchannel array (MAMA) detector as the imaging device. The method of producing high-resolution images is based on the analysis of so-called 'near-axis' bispectral subplanes and follows the work of Lohmann et al. (1983). In order to improve the signal-to-noise ratio in the bispectrum, the frame-oversampling technique of Nakajima et al. (1989) is also employed. We present speckle imaging results of binary stars and other objects from V magnitude 5.5 to 11, and the quality of these images is studied. While the Stanford system is capable of good speckle imaging results, it is limited by the overall quantum efficiency of the current MAMA detector (which is due to the response of the photocathode at visible wavelengths and other detector properties) and by channel saturation of the microchannel plate. Both affect the signal-to-noise ratio of the power spectrum and bispectrum.
Anima: Modular Workflow System for Comprehensive Image Data Analysis
Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa
2014-01-01
Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing, through segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies: testing different algorithms developed on different imaging platforms, and automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
Towards nonionizing photoacoustic cystography
NASA Astrophysics Data System (ADS)
Kim, Chulhong; Jeon, Mansik; Wang, Lihong V.
2012-02-01
Normally, urine flows down from the kidneys to the bladder. Vesicoureteral reflux (VUR) is the abnormal flow of urine from the bladder back to the kidneys; it commonly follows urinary tract infection and leads to renal infection. Fluoroscopic voiding cystourethrography and direct radionuclide voiding cystography have been the clinical gold standards for VUR imaging, but these methods are ionizing. Here, we demonstrate the feasibility of a novel, nonionizing approach to VUR mapping in vivo, called photoacoustic cystography (PAC). Using a photoacoustic (PA) imaging system, we have successfully imaged a rat bladder filled with the clinically used methylene blue dye; an image contrast of ~8 was achieved. Further, spectroscopic PAC confirmed the accumulation of methylene blue in the bladder. Using a laser pulse fluence of less than 1 mJ/cm2, the bladder was clearly visible in the PA image. Our results suggest that this technology could be a useful clinical tool, allowing clinicians to identify the bladder noninvasively in vivo.
Inferring Biological Structures from Super-Resolution Single Molecule Images Using Generative Models
Maji, Suvrajit; Bruchez, Marcel P.
2012-01-01
Localization-based super-resolution imaging is presently limited by sampling requirements for dynamic measurements of biological structures. Generating an image requires serial acquisition of individual molecular positions at sufficient density to define a biological structure, increasing the acquisition time. Efficient analysis of biological structures from sparse localization data could substantially improve the dynamic imaging capabilities of these methods. Using a feature extraction technique called the Hough transform, simple biological structures are identified from both simulated and real localization data. We demonstrate that these generative models can efficiently infer biological structures in the data from far fewer localizations than are required for complete spatial sampling. Analysis at partial data densities revealed efficient recovery of clathrin vesicle size distributions and microtubule orientation angles with as little as 10% of the localization data. This approach significantly increases the temporal resolution for dynamic imaging and provides quantitatively useful biological information. PMID:22629348
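The idea of fitting a parametric structure to sparse localizations can be illustrated with a line-detecting Hough transform over (x, y) molecular positions; this is a generic sketch (the paper also fits circles for vesicles), and the accumulator resolution is an arbitrary choice.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=128, extent=1.0):
    """Accumulate (x, y) localizations into a (rho, theta) Hough
    accumulator; a peak bin identifies a line rho = x*cos(t) + y*sin(t).
    Fitting structure in this parameter space needs far fewer
    localizations than rendering a fully sampled image."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho for every point at every candidate angle, shape (N, n_theta)
    rho = points[:, 0, None] * np.cos(thetas) + points[:, 1, None] * np.sin(thetas)
    rho_bins = np.linspace(-extent, extent, n_rho + 1)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for j in range(n_theta):
        hist, _ = np.histogram(rho[:, j], bins=rho_bins)
        acc[:, j] = hist
    return acc, thetas, rho_bins
```

A vertical filament at x = 0.5, for example, produces a single dominant bin at theta = 0, rho = 0.5, even from a few dozen localizations.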
Munn, Samson
2004-12-01
Avoidance of falsely positive results depends on distinguishing reality from artifact, which in turn depends on images of the highest quality. In radionuclide cardiac imaging, an artifactual inferior-wall defect, the so-called "diaphragmatic attenuation", is particularly common and vexing. Despite the historically held view, analysis and review of the literature suggest the defect is likely not diaphragmatic but primarily due to attenuation by the nearby stomach wall. The explanation is based on gravity and anatomy. With this improved understanding, effervescent granules were given as a clinical, nonresearch measure to nine patients during myocardial scanning. Two-thirds demonstrated moderate or marked lessening of the attenuation. An additional benefit is a lessening of artifact from extracardiac activity. These benefits may also apply to other sorts of cardiac radionuclide imaging. The significance of this new imaging method is discussed and various avenues of research are proposed.
Segmentation of cortical bone using fast level sets
NASA Astrophysics Data System (ADS)
Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Årjan; Moreno, Rodrigo
2017-02-01
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images, but traditional implementations are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through high-resolution peripheral quantitative computed tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images; cortical thickness is computed using sphere fitting and cortical porosity using mathematical morphology operations. Qualitative comparison between the segmentations of the proposed algorithm and a previously published approach on six image volumes reveals superior smoothness of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, making it suitable for clinical settings.
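The two derived parameters can be sketched with standard image-processing primitives. This is a hedged 2-D illustration only: the study works on 3-D HR-pQCT volumes, and the sphere fitting is approximated here by a Euclidean distance transform (the largest inscribed sphere radius at each voxel); the closing depth is arbitrary.

```python
import numpy as np
from scipy import ndimage

def cortical_thickness_map(mask, spacing=1.0):
    """Sphere-fitting-style thickness estimate: for each voxel of the
    cortex mask, the largest inscribed sphere radius is approximated by
    the Euclidean distance to the background; thickness ~ 2 * radius."""
    edt = ndimage.distance_transform_edt(mask, sampling=spacing)
    return 2.0 * edt

def cortical_porosity(mask_with_pores):
    """Porosity = pore volume / total cortical volume, with pores taken
    as the holes filled by a morphological closing of the segmentation."""
    closed = ndimage.binary_closing(mask_with_pores, iterations=2)
    pores = closed & ~mask_with_pores
    return pores.sum() / closed.sum()
```

On a synthetic 5-voxel-wide slab, the thickness map peaks at the slab centre, and a single-voxel pore yields a small positive porosity.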
NASA Astrophysics Data System (ADS)
Kleinmann, Johanna; Wueller, Dietmar
2007-01-01
Since the signal-to-noise measuring method standardized in the normative part of ISO 15739:2002(E) does not quantify noise in a way that matches the perception of the human eye, two alternative methods that may quantify noise perception in a physiological manner have been investigated:
- the visual noise measurement model proposed by Hung et al. (described in the informative annex of ISO 15739:2002), which simulates the process of human vision using the opponent color space and contrast sensitivity functions, and uses the CIE L*u*v* 1976 colour space to determine a so-called visual noise value;
- the S-CIELab model with the CIEDE2000 colour difference proposed by Fairchild et al., which simulates human vision in approximately the same way as Hung et al. but afterwards performs an image comparison based on CIEDE2000.
With a psychophysical experiment based on the just noticeable difference (JND), threshold images were defined with which the two approaches were tested. The assumption is that, if a method is valid, the different threshold images should receive the same noise value. The visual noise measurement model yields similar visual noise values for all the threshold images, so it is reliable for quantifying at least the JND for noise in uniform areas of digital images. While the visual noise measurement model can only evaluate uniform colour patches, the S-CIELab model can also be used on images with spatial content. The S-CIELab model likewise yields similar colour-difference values for the set of threshold images, but with a limitation: for images that contain spatial structures besides the noise, the colour difference varies depending on the contrast of the spatial content.
MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery
Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.
2016-01-01
Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT.
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239
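The MIND component of the method above can be illustrated in isolation. The following is a simplified 2-D sketch under stated assumptions (four axial neighbours only, a box-filtered patch distance, and a mean-based variance estimate); it is not the paper's 3-D descriptor or its Demons optimization.

```python
import numpy as np
from scipy import ndimage

def mind_descriptor(img, radius=1):
    """2-D sketch of a modality independent neighborhood descriptor:
    box-filtered patch distances to the four axial neighbours are turned
    into exp(-d/v) self-similarity responses, with v a local variance
    estimate. Because it encodes self-similarity rather than raw
    intensity, the descriptor can be compared across MR and CT with a
    simple point-wise penalty (the paper uses a robust Huber metric)."""
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    size = 2 * radius + 1
    dists = []
    for shift in shifts:
        diff = (img - np.roll(img, shift, axis=(0, 1))) ** 2
        dists.append(ndimage.uniform_filter(diff, size=size))
    dists = np.stack(dists)                        # (4, H, W)
    v = dists.mean(axis=0) + 1e-12                 # local variance estimate
    desc = np.exp(-dists / v)
    return desc / desc.max(axis=0, keepdims=True)  # normalise per pixel
```

The modality-independence property shows up directly: an affine intensity change, including contrast inversion, leaves the descriptor essentially unchanged.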
Achuthan, Anusha; Rajeswari, Mandava; Ramachandram, Dhanesh; Aziz, Mohd Ezane; Shuaib, Ibrahim Lutfi
2010-07-01
This paper introduces an approach for segmenting regions in computed tomography (CT) images that exhibit intra-region intensity variations while having intensity distributions similar to those of surrounding or adjacent regions. We adapt a feature computed from the wavelet transform, called wavelet energy, to represent the region information. The wavelet energy is embedded into a level set model to formulate the segmentation model, called wavelet energy-guided level set-based active contour (WELSAC). The WELSAC model is evaluated using several synthetic and CT images focusing on tumour cases, which contain regions exhibiting intra-region intensity variations and high similarity in intensity distribution to adjacent regions. The obtained results show that the proposed WELSAC model is able to segment regions of interest in close correspondence with the manual delineation provided by the medical experts, and so provides a solution for tumour detection. Copyright 2010 Elsevier Ltd. All rights reserved.
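The wavelet energy feature can be sketched with a one-level Haar transform; this is a generic illustration of the feature (the paper does not specify the wavelet basis used here, so Haar is an assumption), not the WELSAC level set model itself.

```python
import numpy as np

def haar2d(block):
    """One-level 2-D Haar transform of an even-sized block, returning
    the (LL, LH, HL, HH) subbands."""
    a = (block[0::2] + block[1::2]) / 2.0       # row pairs: average
    d = (block[0::2] - block[1::2]) / 2.0       # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_energy(block):
    """Wavelet energy of a block: mean squared magnitude of the detail
    subbands. It is high in textured or intensity-varying regions and
    zero in flat ones, which is what lets it separate regions whose raw
    intensity histograms overlap."""
    _, lh, hl, hh = haar2d(block.astype(float))
    return float((lh ** 2 + hl ** 2 + hh ** 2).mean())
```

In a WELSAC-style model this scalar would be computed per local window and drive the level set evolution instead of raw intensity.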
Seamless contiguity method for parallel segmentation of remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling
2015-12-01
Seamless contiguity is the key technology for parallel segmentation of remote sensing data of large volume: it integrates the fragments produced by parallel processing into consistent results for subsequent processes. Numerous methods are reported in the literature for seamless contiguity, such as establishing buffers, merging area boundaries, and data stitching. We propose a new method, also based on building buffers. The seamless contiguity process we adopt follows two principles: ensuring the accuracy of the boundaries and ensuring the correctness of the topology. Firstly, the number of blocks is computed based on the available processing capacity; unlike methods that establish buffers on both sides of a block line, a buffer is established only on the right side and underside of the line. Each block is then segmented independently, yielding segmentation objects and their label values. Secondly, one block (called the master block) is chosen and stitched to its adjacent blocks (called slave blocks); the remaining blocks are processed in sequence. Through this processing, the topological relationships and boundaries of the master block are preserved. Thirdly, where master-block polygon boundaries intersect the buffer boundary, or slave-block polygon boundaries intersect the block line, a set of rules is applied to merge them or trade them off. Fourthly, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted that demonstrates the feasibility of this method. This seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.
Fast Open-World Person Re-Identification.
Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi
2018-05-01
Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and are breached in real-world applications, since the probe population in target-people search can be extremely vast in practice due to the ambiguity of the probe search-space boundary. It is therefore unrealistic to assume that every probe person is a target person, and a large-scale search over person images is inherently demanded. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge probe image set and an open person population, and thus closer to practical deployments. Under LSOW, the under-studied problem of re-id efficiency is essential in addition to the commonly studied re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.
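The efficiency argument rests on searching binary codes by Hamming distance; a minimal numpy sketch of that search primitive is below. It shows only the generic XOR-plus-popcount step, not the X-ICE learning of the codes; the 16-bit codes are toy values.

```python
import numpy as np

def pack_codes(bits):
    """Pack an (N, n_bits) 0/1 matrix into bytes for compact storage."""
    return np.packbits(bits.astype(np.uint8), axis=1)

def hamming_rank(query_packed, gallery_packed):
    """Rank gallery identities by Hamming distance to the query code:
    XOR plus popcount, the operation that keeps search over huge
    open-world galleries fast and memory-light."""
    xor = np.bitwise_xor(gallery_packed, query_packed)
    dists = np.unpackbits(xor, axis=1).sum(axis=1)
    return np.argsort(dists, kind="stable"), dists
```

With b-bit codes, both storage (b/8 bytes per identity) and distance computation are constant-factor cheap, which is what makes the LSOW setting tractable.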
Lensless digital holography with diffuse illumination through a pseudo-random phase mask.
Bernet, Stefan; Harm, Walter; Jesacher, Alexander; Ritsch-Marte, Monika
2011-12-05
Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern onto a CMOS chip, which is recorded once as a reference. A sample subsequently inserted anywhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration the method works in real time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired so-called twin images. For demonstration, a high-resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, e.g., insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.
Improved In vivo Assessment of Pulmonary Fibrosis in Mice using X-Ray Dark-Field Radiography
NASA Astrophysics Data System (ADS)
Yaroshenko, Andre; Hellbach, Katharina; Yildirim, Ali Önder; Conlon, Thomas M.; Fernandez, Isis Enlil; Bech, Martin; Velroyen, Astrid; Meinel, Felix G.; Auweter, Sigrid; Reiser, Maximilian; Eickelberg, Oliver; Pfeiffer, Franz
2015-12-01
Idiopathic pulmonary fibrosis (IPF) is a chronic and progressive lung disease with a median life expectancy of 4-5 years after initial diagnosis. Early diagnosis and accurate monitoring of IPF are limited by a lack of sensitive imaging techniques able to visualize early fibrotic changes at the epithelial-mesenchymal interface. Here, we report a new x-ray imaging approach that directly visualizes the air-tissue interfaces in mice in vivo. The method is based on the detection of small-angle x-ray scattering that occurs at the air-tissue interfaces in the lung. Small-angle scattering is detected with a Talbot-Lau interferometer, which provides the so-called x-ray dark-field signal. Using this imaging modality, we demonstrate, for the first time, the quantification of early pathogenic changes and their correlation with histological changes, as assessed by stereological morphometry. The presented radiography method is significantly more sensitive in detecting morphological changes than conventional x-ray imaging, and it delivers a significantly lower radiation dose than conventional x-ray CT. As a result of the improved sensitivity, this new imaging modality could in the future reduce the number of animals required for pulmonary research studies.
Total variation-based method for radar coincidence imaging with model mismatch for extended target
NASA Astrophysics Data System (ADS)
Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang
2017-11-01
Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as desired; unfortunately, such precision is almost impossible to achieve because of model mismatch in practical applications. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore derive the signal model of RCI with model mismatch by replacing the sparsity constraint with total variation (TV) regularization in the sparse total least squares optimization problem, obtaining the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, owing to the ability of TV regularization to recover signals or images with sparse gradients, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves good imaging performance both in suppressing noise and in adapting to model mismatch.
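The role TV regularization plays for extended targets can be illustrated with plain TV denoising by gradient descent. This is not the TV-TLS algorithm (which alternates perturbation-matrix and coefficient updates); it is a generic sketch of the TV term alone, with an added smoothing constant eps and illustrative values of lam, step, and the iteration count.

```python
import numpy as np

def tv_denoise(y, lam=0.2, step=0.05, n_iter=400, eps=0.05):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where
    TV_eps uses sqrt(|grad x|^2 + eps^2) as a smoothed isotropic
    total-variation term. TV favours piecewise-constant solutions,
    which is why it suits extended (non-sparse) targets where a plain
    sparsity constraint fails."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        # forward differences with replicated last row/column
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
        px, py = dx / mag, dy / mag
        # divergence of the normalised gradient field (wrapped borders,
        # kept simple for this sketch)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)
    return x
```

On a noisy piecewise-constant scene the iteration suppresses noise in flat areas while retaining the edge, the behaviour the abstract relies on for extended targets.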
Rand, Danielle; Derdak, Zoltan; Carlson, Rolf; ...
2015-10-29
Hepatocellular carcinoma (HCC) is one of the most common malignant tumors worldwide and is almost uniformly fatal. Current methods of detection include ultrasound examination and imaging by CT scan or MRI; however, these techniques are problematic in terms of sensitivity and specificity, and the detection of early tumors (<1 cm diameter) has proven elusive. Better, more specific, and more sensitive detection methods are therefore urgently needed. Here we discuss the application of a newly developed x-ray imaging technique called Spatial Frequency Heterodyne Imaging (SFHI) for the early detection of HCC. SFHI uses x-rays scattered by an object to form an image and is more sensitive than conventional absorption-based x-radiography. We show that tissues labeled in vivo with gold nanoparticle contrast agents can be detected using SFHI. We also demonstrate that directed targeting and SFHI of HCC tumors in a mouse model is possible through the use of HCC-specific antibodies. The enhanced sensitivity of SFHI relative to currently available techniques enables x-ray imaging of tumors that are just a few millimeters in diameter and substantially reduces the amount of nanoparticle contrast agent required for intravenous injection relative to absorption-based x-ray imaging.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field, real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS; the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages over a single moving camera or a multi-camera network. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using a single working camera avoids a drawback of multi-camera networks: variability in the cameras' parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of the multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Efficient Kriging via Fast Matrix-Vector Products
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Raykar, Vikas C.; Duraiswami, Ramani; Mount, David M.
2008-01-01
Interpolating scattered data points is a problem of wide-ranging interest. Ordinary kriging is an optimal scattered data estimator, widely used in geosciences and remote sensing. A generalized version of this technique, called cokriging, can be used for image fusion of remotely sensed data. However, it is computationally very expensive for large data sets. We demonstrate the time efficiency and accuracy of approximating ordinary kriging through the use of fast matrix-vector products combined with iterative methods. We used fast multipole methods and nearest-neighbor searching techniques to implement the fast matrix-vector products.
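The core idea, solving the kriging system with an iterative solver that touches the covariance matrix only through matrix-vector products, can be sketched as follows. This is a minimal simple-kriging illustration in which a dense Gaussian covariance matvec stands in for a fast multipole or nearest-neighbor accelerated product; the covariance model, length scale, and nugget are illustrative assumptions, not details from the paper:

```python
import numpy as np

def gauss_cov(X, Y, length=1.0):
    # Gaussian (squared-exponential) covariance between two point sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length ** 2))

def cg_solve(matvec, b, tol=1e-10, maxiter=500):
    # Conjugate gradients: needs K only through matrix-vector products
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))            # scattered sample locations
z = np.sin(4 * X[:, 0]) + np.cos(3 * X[:, 1])  # observed values
K = gauss_cov(X, X) + 1e-4 * np.eye(len(X))    # covariance plus a nugget term
w = cg_solve(lambda v: K @ v, z)               # weights via matvecs only
pred = gauss_cov(np.array([[0.5, 0.5]]), X) @ w  # prediction at a new point
```

In practice the speedup comes from replacing the dense `K @ v` with an approximate fast product; conjugate gradients never needs `K` explicitly.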
Deep classification hashing for person re-identification
NASA Astrophysics Data System (ADS)
Wang, Jiabao; Li, Yang; Zhang, Xiancai; Miao, Zhuang; Tao, Gang
2018-04-01
With the growth of public surveillance, person re-identification is becoming more and more important. Large-scale databases call for efficient computation and storage, and hashing is one of the most important techniques for this. In this paper, we propose a new deep classification hashing network that introduces a new binary appropriation layer into traditional ImageNet pre-trained CNN models. It outputs binary-appropriate features, which can be easily quantized into binary hash codes for Hamming similarity comparison. Experiments show that our deep hashing method outperforms the state-of-the-art methods on the public CUHK03 and Market1501 datasets.
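The retrieval step that motivates hashing, quantizing real-valued network outputs to bits and comparing them with XOR-based Hamming distance, can be sketched generically as follows (made-up feature vectors, not the paper's network):

```python
import numpy as np

def binarize(features):
    # Quantize real-valued network outputs to a {0,1} hash code
    return (features > 0).astype(np.uint8)

def hamming(a, b):
    # Hamming distance between packed codes via XOR + popcount
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

f1 = np.array([0.9, -0.8, 0.7, 0.6, -0.9, 0.8, -0.7, 0.6])   # query features
f2 = np.array([0.8, -0.7, 0.6, -0.5, -0.8, 0.7, -0.6, 0.5])  # gallery features
c1 = np.packbits(binarize(f1))    # 8 bits packed into 1 byte
c2 = np.packbits(binarize(f2))
d = hamming(c1, c2)               # these codes differ in exactly one bit
```

Packed codes make both storage (1 bit per feature) and comparison (bitwise operations) cheap, which is the point of hashing at database scale.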
NASA Astrophysics Data System (ADS)
Laoufi, Fatiha; Belbachir, Ahmed-Hafid; Benabadji, Noureddine; Zanoun, Abdelouahab
2011-10-01
We have mapped the region of Oran, Algeria, using multispectral remote sensing at different resolutions. For the identification of objects on the ground using their spectral signatures, two methods were applied to images from SPOT, LANDSAT, IRS-1C, and ASTER. The first is called the Base Rule (BR) method and is based on a set of rules that must be satisfied by each pixel across the different reflectance-calibrated bands before the pixel is assigned to a given class. The construction of these rules is based on the spectral profiles of the classes present in the studied scene. The second is called the Spectral Angle Mapper (SAM) method and is based on the direct calculation of the spectral angle between the target vector, representing the spectral profile of the desired class, and the pixel vector, whose components are the pixel's values in the different bands of the reflectance-calibrated image. Both methods were implemented using the PCSATWIN software developed by our laboratory, LAAR. After assembling a library of spectral signatures from multiple sources, a detailed study of the principles and physical processes that can influence a spectral signature was conducted. The final goal is to establish the range of variation of the spectral profile of a well-defined class and therefore to obtain precise bases for the spectral rules. From the results obtained, we find that supervised classification by the BR method, with rules derived from spectral signatures, reduces the uncertainty associated with identifying objects, significantly improving the percentage of correct classification for well-separated classes.
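The spectral angle at the heart of the SAM method reduces to the angle between two band vectors. A minimal sketch, with hypothetical four-band spectra rather than data from any of the sensors used in the paper:

```python
import numpy as np

def spectral_angle(pixel, target):
    # Angle (radians) between a pixel spectrum and a class target spectrum
    cos = pixel @ target / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

target = np.array([0.10, 0.15, 0.40, 0.55])   # hypothetical 4-band profile
pix_a = np.array([0.20, 0.30, 0.80, 1.10])    # same shape, twice as bright
pix_b = np.array([0.50, 0.45, 0.20, 0.10])    # spectrally different material

angle_a = spectral_angle(pix_a, target)       # near zero
angle_b = spectral_angle(pix_b, target)       # large
```

A pixel whose spectrum is a scaled copy of the target (the same material under different illumination) has an angle near zero, which is why SAM is robust to brightness changes.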
Forensic imaging tools for law enforcement
DOE Office of Scientific and Technical Information (OSTI.GOV)
SMITHPETER,COLIN L.; SANDISON,DAVID R.; VARGO,TIMOTHY D.
2000-01-01
Conventional methods of gathering forensic evidence at crime scenes are encumbered by difficulties that limit local law enforcement efforts to apprehend offenders and bring them to justice. Working with a local law-enforcement agency, Sandia National Laboratories has developed a prototype multispectral imaging system that can speed up the investigative search task and provide additional and more accurate evidence. The system, called the Criminalistics Light-imaging Unit (CLU), has demonstrated the capabilities of locating fluorescing evidence at crime scenes under normal lighting conditions and of imaging other types of evidence, such as untreated fingerprints, by direct white-light reflectance. CLU employs state-of-the-art technology that provides for viewing and recording of the entire search process on videotape. This report describes the work performed by Sandia to design, build, evaluate, and commercialize CLU.
Adaptive image coding based on cubic-spline interpolation
NASA Astrophysics Data System (ADS)
Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien
2014-09-01
It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, under a given target bit rate utilizing so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L; Siewerdsen, Jeffrey H
2016-11-01
Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine.
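The modality-independent neighborhood descriptor (MIND) at the core of this registration approach can be sketched in 2D as patch distances to axial neighbors, normalized by a local variance estimate. This is a simplified illustration only (a four-neighbor search region, wrap-around borders, and the stabilizing epsilon are our assumptions, not the authors' implementation):

```python
import numpy as np

def box_sum(a, r):
    # Sum over a (2r+1)x(2r+1) window via shifted copies (wrap-around borders)
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out

def mind(img, r=1):
    # MIND sketch: patch distances D_p to the 4 axial neighbors, normalized
    # by a local variance estimate V and mapped through exp(-D_p / V)
    img = img.astype(float)
    dp = []
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        diff2 = (img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)) ** 2
        dp.append(box_sum(diff2, r))        # patch distance to that neighbor
    dp = np.stack(dp)                       # shape (4, H, W)
    v = dp.mean(axis=0) + 1e-6              # local variance estimate
    d = np.exp(-dp / v)
    return d / d.max(axis=0)                # per-pixel normalization

rng = np.random.default_rng(0)
img = rng.uniform(size=(16, 16))
d1 = mind(img)
d2 = mind(3.0 * img + 5.0)   # affine intensity change, e.g. another modality
```

Because intensities enter only as ratios of patch distances to the local variance, the descriptor is essentially unchanged under affine intensity rescaling, which is what makes it usable across modalities.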
Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L.
2016-01-01
Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. PMID:27295656
In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images.
Christiansen, Eric M; Yang, Samuel J; Ando, D Michael; Javaherian, Ashkan; Skibinski, Gaia; Lipnick, Scott; Mount, Elliot; O'Neil, Alison; Shah, Kevan; Lee, Alicia K; Goyal, Piyush; Fedus, William; Poplin, Ryan; Esteva, Andre; Berndl, Marc; Rubin, Lee L; Nelson, Philip; Finkbeiner, Steven
2018-04-19
Microscopy is a central method in life sciences. Many popular methods, such as antibody labeling, are used to add physical fluorescent labels to specific cellular constituents. However, these approaches have significant drawbacks, including inconsistency; limitations in the number of simultaneous labels because of spectral overlap; and necessary perturbations of the experiment, such as fixing the cells, to generate the measurement. Here, we show that a computational machine-learning approach, which we call "in silico labeling" (ISL), reliably predicts some fluorescent labels from transmitted-light images of unlabeled fixed or live biological samples. ISL predicts a range of labels, such as those for nuclei, cell type (e.g., neural), and cell state (e.g., cell death). Because prediction happens in silico, the method is consistent, is not limited by spectral overlap, and does not disturb the experiment. ISL generates biological measurements that would otherwise be problematic or impossible to acquire. Copyright © 2018 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Iwasaki, Ryosuke; Takagi, Ryo; Tomiyasu, Kentaro; Yoshizawa, Shin; Umemura, Shin-ichiro
2017-07-01
Targeting the ultrasound beam and predicting thermal lesion formation in advance are requirements for monitoring high-intensity focused ultrasound (HIFU) treatment with safety and reproducibility. To visualize the HIFU focal zone, we utilized an acoustic radiation force impulse (ARFI) imaging-based method. After displacements were induced inside tissues with pulsed HIFU, called the push pulse exposure, the distribution of axial displacements expanded and moved. To acquire RF data immediately after and during the HIFU push pulse exposure and thereby improve prediction accuracy, we applied methods using extrapolation estimation and HIFU noise elimination. The displacement distributions, traced back in time from the end of the push pulse exposure, are in good agreement with tissue coagulation at the center. The results suggest that the proposed focal zone visualization, employing pulsed HIFU combined with the high-speed ARFI imaging method, is useful for the prediction of thermal coagulation in advance.
Kokaly, R.F.; King, T.V.V.; Hoefen, T.M.
2011-01-01
Identifying materials by measuring and analyzing their reflectance spectra has been an important method in analytical chemistry for decades. Airborne and space-based imaging spectrometers allow scientists to detect materials and map their distributions across the landscape. With new satellite-borne hyperspectral sensors planned for the future, for example, HYSPIRI (HYPerspectral InfraRed Imager), robust methods are needed to fully exploit the information content of hyperspectral remote sensing data. A method of identifying and mapping materials using spectral-feature based analysis of reflectance data in an expert-system framework called MICA (Material Identification and Characterization Algorithm) is described in this paper. The core concepts and calculations of MICA are presented. A MICA command file has been developed and applied to map minerals in the full-country coverage of the 2007 Afghanistan HyMap hyperspectral data. © 2011 IEEE.
FogBank: a single cell segmentation across multiple cell lines and image modalities.
Chalfoun, Joe; Majurski, Michael; Dima, Alden; Stuelten, Christina; Peskin, Adele; Brady, Mary
2014-12-30
Many cell lines currently used in medical research, such as cancer cells or stem cells, grow in confluent sheets or colonies. The biology of individual cells provides valuable information, thus the separation of touching cells in these microscopy images is critical for counting, identification and measurement of individual cells. Over-segmentation of single cells continues to be a major problem for methods based on morphological watershed due to the high level of noise in microscopy cell images. There is a need for a new segmentation method that is robust over a wide variety of biological images and can accurately separate individual cells even in challenging datasets such as confluent sheets or colonies. We present a new automated segmentation method called FogBank that accurately separates cells when confluent and touching each other. This technique is successfully applied to phase contrast, bright field, fluorescence microscopy and binary images. The method is based on morphological watershed principles with two new features to improve accuracy and minimize over-segmentation. First, FogBank uses histogram binning to quantize pixel intensities which minimizes the image noise that causes over-segmentation. Second, FogBank uses a geodesic distance mask derived from raw images to detect the shapes of individual cells, in contrast to the more linear cell edges that other watershed-like algorithms produce. We evaluated the segmentation accuracy against manually segmented datasets using two metrics. FogBank achieved segmentation accuracy on the order of 0.75 (1 being a perfect match). We compared our method with other available segmentation techniques in terms of performance achieved on the reference data sets. FogBank outperformed all related algorithms. The accuracy has also been visually verified on data sets with 14 cell lines across 3 imaging modalities, leading to 876 segmentation evaluation images.
FogBank produces single cell segmentation from confluent cell sheets with high accuracy. It can be applied to microscopy images of multiple cell lines and a variety of imaging modalities. The code for the segmentation method is available as open-source and includes a Graphical User Interface for user friendly execution.
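The first of the two features described above, histogram binning of pixel intensities, can be sketched as follows. This is an illustrative percentile-based variant with our own parameter choices, not the released FogBank code:

```python
import numpy as np

def quantize_intensities(img, n_bins=16):
    # Histogram binning: replace each pixel by the center of its percentile
    # bin, flattening small intensity fluctuations (noise) that drive
    # over-segmentation in watershed-style methods.
    edges = np.percentile(img, np.linspace(0, 100, n_bins + 1))
    edges = np.unique(edges)                      # guard against flat images
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, len(edges) - 2)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[idx]

rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.1, size=(64, 64))        # noisy synthetic image
q = quantize_intensities(img, n_bins=8)          # at most 8 intensity levels
```

After binning, a watershed-like flooding sees at most `n_bins` plateaus instead of a noisy continuum, which is what suppresses spurious catchment basins.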
NASA Technical Reports Server (NTRS)
2003-01-01
With NASA on its side, Positive Systems, Inc., of Whitefish, Montana, is veering away from the industry standards defined for producing and processing remotely sensed images. A top developer of imaging products for geographic information system (GIS) and computer-aided design (CAD) applications, Positive Systems is bucking traditional imaging concepts with a cost-effective and time-saving software tool called Digital Images Made Easy (DIME(trademark)). Like piecing a jigsaw puzzle together, DIME can integrate a series of raw aerial or satellite snapshots into a single, seamless panoramic image, known as a 'mosaic.' The 'mosaicked' images serve as useful backdrops to GIS maps - which typically consist of line drawings called 'vectors' - by allowing users to view a multidimensional map that provides substantially more geographic information.
Hands-on guide for 3D image creation for geological purposes
NASA Astrophysics Data System (ADS)
Frehner, Marcel; Tisato, Nicola
2013-04-01
Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror stereoscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
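Composing a red-cyan anaglyph is essentially a single channel swap: the red channel comes from the left image and the green and blue (cyan) channels from the right. A minimal sketch with tiny synthetic images:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    # Red channel from the left image, green and blue (cyan) from the right
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

left = np.zeros((4, 4, 3), dtype=np.uint8)
left[..., 0] = 200                      # left view: red content
right = np.zeros((4, 4, 3), dtype=np.uint8)
right[..., 1] = 100                     # right view: green ...
right[..., 2] = 50                      # ... and blue
ana = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye receives only its own view, which is the color-preservation trade-off the text mentions: any red or cyan content in the scene itself ends up assigned to one eye.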
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
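The affine relation between color histograms under an illumination change can be illustrated with a standard mean-and-covariance matching transform. Note this is a generic sketch of the idea, not the paper's tensor-based estimation or its simplified translation-scaling-rotation model:

```python
import numpy as np

def msqrt(C, inv=False):
    # Symmetric matrix square root (or inverse root) via eigendecomposition
    w, V = np.linalg.eigh(C)
    w = np.sqrt(w)
    if inv:
        w = 1.0 / w
    return (V * w) @ V.T

def histogram_normalize(src, ref):
    # Affine correction x -> T (x - m_src) + m_ref, with T chosen so the
    # corrected samples match the reference mean and covariance
    ms, mr = src.mean(0), ref.mean(0)
    T = msqrt(np.cov(ref.T)) @ msqrt(np.cov(src.T), inv=True)
    return (src - ms) @ T.T + mr

rng = np.random.default_rng(2)
src = rng.normal(size=(2000, 3)) @ np.diag([0.2, 0.1, 0.05]) + [0.4, 0.5, 0.3]
M = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.1, 0.0, 0.7]])
ref = src @ M.T + [0.05, -0.02, 0.01]   # simulated illumination change
rec = histogram_normalize(src, ref)     # corrected color samples
```

Matching the first two moments recovers an affine map consistent with the illumination change up to a residual rotation, which is why the paper's restricted (translation, scaling, rotation) model is estimated with more care than this sketch.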
NASA Astrophysics Data System (ADS)
Yang, Gongping; Zhou, Guang-Tong; Yin, Yilong; Yang, Xiukun
2010-12-01
A critical step in an automatic fingerprint recognition system is the segmentation of fingerprint images. Existing methods are usually designed to segment fingerprint images originating from a certain sensor. Thus their performances are significantly affected when dealing with fingerprints collected by different sensors. This work studies the sensor interoperability of fingerprint segmentation algorithms, which refers to the algorithm's ability to adapt to the raw fingerprints obtained from different sensors. We empirically analyze the sensor interoperability problem, and effectively address the issue by proposing a k-means based segmentation method called SKI. SKI clusters foreground and background blocks of a fingerprint image based on the k-means algorithm, where a fingerprint block is represented by a 3-dimensional feature vector consisting of block-wise coherence, mean, and variance (abbreviated as CMV). SKI also employs morphological postprocessing to achieve favorable segmentation results. We perform SKI on each fingerprint to ensure sensor interoperability. The interoperability and robustness of our method are validated by experiments performed on a number of fingerprint databases which are obtained from various sensors.
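The block-wise coherence, mean, and variance (CMV) features and the clustering step can be sketched as follows. The synthetic stripe image, block size, and the minimal 2-means routine are our illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def block_cmv(img, bs=8):
    # Block-wise Coherence, Mean, Variance (CMV) features of a grayscale image
    H, W = img.shape
    feats = []
    for i in range(0, H - bs + 1, bs):
        for j in range(0, W - bs + 1, bs):
            b = img[i:i + bs, j:j + bs].astype(float)
            by, bx = np.gradient(b)
            gxx, gyy, gxy = (bx * bx).sum(), (by * by).sum(), (bx * by).sum()
            coherence = np.hypot(gxx - gyy, 2 * gxy) / (gxx + gyy + 1e-9)
            feats.append([coherence, b.mean(), b.var()])
    return np.asarray(feats)

def two_means(X, iters=20):
    # Minimal 2-means clustering on standardized feature vectors
    X = (X - X.mean(0)) / (X.std(0) + 1e-9)
    c = X[[0, -1]]                        # init: first and last block
    lab = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = X[lab == k].mean(0)
    return lab

img = np.zeros((32, 64))
img[:, :32] = 0.5 + 0.5 * np.sin(np.arange(32) * np.pi / 4)[None, :]  # ridges
img[:, 32:] = 0.5                                                     # flat
labels = two_means(block_cmv(img)).reshape(4, 8)   # foreground vs background
```

Ridge-like foreground blocks have high coherence and variance, flat background blocks have neither, so the two clusters separate cleanly on the standardized CMV features regardless of the sensor's absolute intensity scale.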
Corum, Curtis A; Idiyatullin, Djaudat; Snyder, Carl J; Garwood, Michael
2015-02-01
SWIFT (SWeep Imaging with Fourier Transformation) is a non-Cartesian MRI method with unique features and capabilities. In SWIFT, radiofrequency (RF) excitation and reception are performed nearly simultaneously, by rapidly switching between transmit and receive during a frequency-swept RF pulse. Because both the transmitted pulse and data acquisition are simultaneously amplitude-modulated in SWIFT (in contrast to continuous RF excitation and uninterrupted data acquisition in more familiar MRI sequences), crosstalk between different frequency bands occurs in the data. This crosstalk leads to a "bulls-eye" artifact in SWIFT images. We present a method to cancel this interband crosstalk by cycling the pulse and receive gap positions relative to the un-gapped pulse shape. We call this strategy "gap cycling." We carry out theoretical analysis, simulation and experiments to characterize the signal chain, resulting artifacts, and their elimination for SWIFT. Theoretical analysis reveals the mechanism for gap-cycling's effectiveness in canceling interband crosstalk in the received data. We show phantom and in vivo results demonstrating bulls-eye artifact free images. Gap cycling is an effective method to remove bulls-eye artifact resulting from interband crosstalk in SWIFT data. © 2014 Wiley Periodicals, Inc.
Integration of heterogeneous features for remote sensing scene classification
NASA Astrophysics Data System (ADS)
Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang
2018-01-01
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery.
Reaungamornrat, S; De Silva, T; Uneri, A; Wolinsky, J-P; Khanna, A J; Kleinszig, G; Vogt, S; Prince, J L; Siewerdsen, J H
2016-02-27
Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. 
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.
Design of Multishell Sampling Schemes with Uniform Coverage in Diffusion MRI
Caruyer, Emmanuel; Lenglet, Christophe; Sapiro, Guillermo; Deriche, Rachid
2017-01-01
Purpose: In diffusion MRI, a technique known as diffusion spectrum imaging reconstructs the propagator with a discrete Fourier transform, from a Cartesian sampling of the diffusion signal. Alternatively, it is possible to directly reconstruct the orientation distribution function in q-ball imaging, providing so-called high angular resolution diffusion imaging. In between these two techniques, acquisitions on several spheres in q-space offer an interesting trade-off between the angular resolution and the radial information gathered in diffusion MRI. A careful design is central to the success of multishell acquisition and reconstruction techniques. Methods: The design of multishell acquisition schemes is still an open and active field of research, however. In this work, we provide a general method to design multishell acquisitions with uniform angular coverage. This method is based on a generalization of electrostatic repulsion to multishell. Results: We evaluate the impact of our method using simulations, on the angular resolution in one- and two-bundle fiber configurations. Compared to more commonly used radial sampling, we show that our method improves the angular resolution, as well as fiber crossing discrimination. Discussion: We propose a novel method to design sampling schemes with optimal angular coverage and show the positive impact on angular resolution in diffusion MRI. PMID:23625329
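Electrostatic repulsion for sampling design can be sketched on a single shell: unit directions repel each other (and each other's antipodes, since a diffusion direction u is equivalent to -u) under a Coulomb energy minimized by projected gradient descent. The paper generalizes this energy across several shells; the step size and iteration count below are illustrative assumptions:

```python
import numpy as np

def repulsion_energy(P):
    # Coulomb energy of antipodally symmetric unit directions on one shell
    E = 0.0
    for i in range(len(P)):
        for j in range(i + 1, len(P)):
            # a gradient direction u is equivalent to -u, so repel both copies
            E += 1.0 / max(np.linalg.norm(P[i] - P[j]), 1e-12)
            E += 1.0 / max(np.linalg.norm(P[i] + P[j]), 1e-12)
    return E

def optimize_shell(P, steps=200, lr=1e-3):
    # Projected gradient descent: push directions apart, renormalize to sphere
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    for _ in range(steps):
        G = np.zeros_like(P)
        for i in range(len(P)):
            for j in range(len(P)):
                if i == j:
                    continue
                for s in (-1.0, 1.0):        # repel P[j] and its antipode
                    d = P[i] + s * P[j]
                    nd = np.linalg.norm(d)
                    G[i] += d / (nd ** 3 + 1e-12)
        P = P + lr * G
        P /= np.linalg.norm(P, axis=1, keepdims=True)
    return P

rng = np.random.default_rng(3)
P0 = rng.normal(size=(12, 3))                # 12 random directions
P1 = optimize_shell(P0)                      # repulsion-optimized directions
```

In the multishell generalization, cross-shell repulsion terms are added so that directions are well spread both within each shell and across the combined set.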
Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy
Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L.; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E.; Schnitt, Stuart J.; Beck, Andrew H.; Boyden, Edward S.
2017-01-01
Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding the specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin (H&E), and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ~70 nm resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes, and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, which previously required electron microscopy (EM), and demonstrate high-fidelity computational discrimination between early breast neoplastic lesions that to date have challenged human judgment. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research. PMID:28714966
NASA Astrophysics Data System (ADS)
You, Wonsang; Andescavage, Nickie; Zun, Zungho; Limperopoulos, Catherine
2017-03-01
Intravoxel incoherent motion (IVIM) magnetic resonance imaging is an emerging non-invasive technique that has been recently applied to quantify in vivo global placental perfusion. We propose a robust semi-automated method for segmenting the placenta into fetal and maternal compartments from IVIM data, using a multi-label image segmentation algorithm called GrowCut. Placental IVIM data were acquired on a 1.5T scanner from 16 healthy pregnant women between 21-37 gestational weeks. The voxel-wise perfusion fraction was then estimated after non-rigid image registration. The seed regions of the fetal and maternal compartments were determined using structural T2-weighted reference images, and improved progressively through an iterative process of the GrowCut algorithm to accurately encompass fetal and maternal compartments. We demonstrated that the placental perfusion fraction decreased in both fetal (-0.010/week) and maternal compartments (-0.013/week) while their relative difference (f_fetal - f_maternal) gradually increased with advancing gestational age (+0.003/week, p=0.065). Our preliminary results show that the proposed method was effective in distinguishing placental compartments using IVIM.
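GrowCut itself is a compact cellular automaton: each labeled cell attacks its neighbors with a strength attenuated by the local intensity difference. A minimal 2D grayscale sketch (4-neighborhood, linear attenuation, a tiny synthetic image), not the authors' clinical pipeline:

```python
import numpy as np

def growcut(img, seeds, iters=50):
    # Minimal GrowCut cellular automaton on a 2D grayscale image.
    # seeds: 0 = unlabeled, 1..K = user seed labels. Labeled cells attack
    # their 4-neighbors with strength attenuated by intensity difference.
    img = img.astype(float)
    span = img.max() - img.min() + 1e-9
    label = seeds.copy()
    strength = (seeds > 0).astype(float)
    H, W = img.shape
    for _ in range(iters):
        changed = False
        new_label, new_strength = label.copy(), strength.copy()
        for y in range(H):
            for x in range(W):
                for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    qy, qx = y + dy, x + dx
                    if not (0 <= qy < H and 0 <= qx < W) or label[qy, qx] == 0:
                        continue
                    g = 1.0 - abs(img[y, x] - img[qy, qx]) / span  # attenuation
                    attack = g * strength[qy, qx]
                    if attack > new_strength[y, x]:
                        new_strength[y, x] = attack
                        new_label[y, x] = label[qy, qx]
                        changed = True
        label, strength = new_label, new_strength
        if not changed:
            break
    return label

img = np.zeros((8, 8))
img[:, 4:] = 1.0                       # two homogeneous intensity regions
seeds = np.zeros((8, 8), dtype=int)
seeds[4, 1] = 1                        # seed for the left compartment
seeds[4, 6] = 2                        # seed for the right compartment
lab = growcut(img, seeds)
```

Each label floods its homogeneous region but stalls at the intensity boundary, where the attenuation g drops to zero, which is how seed regions grow into whole compartments.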
Three-dimensional focus of attention for iterative cone-beam micro-CT reconstruction
NASA Astrophysics Data System (ADS)
Benson, T. M.; Gregor, J.
2006-09-01
Three-dimensional iterative reconstruction of high-resolution, circular orbit cone-beam x-ray CT data is often considered impractical due to the demand for vast amounts of computer cycles and associated memory. In this paper, we show that the computational burden can be reduced by limiting the reconstruction to a small, well-defined portion of the image volume. We first discuss using the support region defined by the set of voxels covered by all of the projection views. We then present a data-driven preprocessing technique called focus of attention that heuristically separates both image and projection data into object and background before reconstruction, thereby further reducing the reconstruction region of interest. We present experimental results for both methods based on mouse data and a parallelized implementation of the SIRT algorithm. The computational savings associated with the support region are substantial. However, the results for focus of attention are even more impressive in that only about one quarter of the computer cycles and memory are needed compared with reconstruction of the entire image volume. The image quality is not compromised by either method.
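The savings from restricting reconstruction to a support region can be illustrated with a toy SIRT iteration: columns of the system matrix corresponding to voxels outside the support are simply dropped before iterating. This is a minimal dense-matrix sketch, not the parallelized implementation described above; the tiny two-ray system and the `sirt` helper are hypothetical.

```python
import numpy as np

def sirt(A, b, n_iter=200, support=None):
    """Toy SIRT for a dense system Ax = b; `support` is a boolean mask
    restricting the reconstruction to a subset of voxels."""
    m, n = A.shape
    if support is None:
        support = np.ones(n, dtype=bool)
    As = A[:, support]                              # drop voxels outside the support
    row = 1.0 / np.maximum(As.sum(axis=1), 1e-12)   # row-sum normalisation
    col = 1.0 / np.maximum(As.sum(axis=0), 1e-12)   # column-sum normalisation
    xs = np.zeros(As.shape[1])
    for _ in range(n_iter):
        xs += col * (As.T @ (row * (b - As @ xs)))
    x = np.zeros(n)
    x[support] = xs
    return x

# two projection "rays" through a 4-voxel volume; the last voxel lies
# outside the support region and is excluded from the iteration entirely
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0]])
x_true = np.array([2.0, 1.0, 3.0, 0.0])
b = A @ x_true
x = sirt(A, b, support=np.array([True, True, True, False]))
```

The per-iteration cost scales with the number of retained columns, which is why shrinking the region of interest (support region, then focus of attention) cuts both cycles and memory.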
Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy.
Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E; Schnitt, Stuart J; Beck, Andrew H; Boyden, Edward S
2017-08-01
Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding a specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin, and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ∼70-nm-resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, a process that previously required electron microscopy, and we demonstrate high-fidelity computational discrimination between early breast neoplastic lesions for which pathologists often disagree in classification. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research.
A novel image-based quantitative method for the characterization of NETosis
Zhao, Wenpu; Fogg, Darin K.; Kaplan, Mariana J.
2015-01-01
NETosis is a newly recognized mechanism of programmed neutrophil death. It is characterized by a stepwise progression of chromatin decondensation, membrane rupture, and release of bactericidal DNA-based structures called neutrophil extracellular traps (NETs). Conventional ‘suicidal’ NETosis has been described in pathogenic models of systemic autoimmune disorders. Recent in vivo studies suggest that a process of ‘vital’ NETosis also exists, in which chromatin is condensed and membrane integrity is preserved. Techniques to assess ‘suicidal’ or ‘vital’ NET formation in a specific, quantitative, rapid and semiautomated way have been lacking, hindering the characterization of this process. Here we have developed a new method to simultaneously assess both ‘suicidal’ and ‘vital’ NETosis, using high-speed multi-spectral imaging coupled to morphometric image analysis, to quantify spontaneous NET formation observed ex-vivo or stimulus-induced NET formation triggered in vitro. Use of imaging flow cytometry allows automated, quantitative and rapid analysis of subcellular morphology and texture, and introduces the potential for further investigation using NETosis as a biomarker in pre-clinical and clinical studies. PMID:26003624
NASA Astrophysics Data System (ADS)
Hama, Hiromitsu; Yamashita, Kazumi
1991-11-01
A new method for video signal processing is described in this paper. The purpose is real-time image transformation at low cost, with low power and small hardware size. This is impossible without special hardware. Here a generalized digital differential analyzer (DDA) and control memory (CM) play a very important role. However, the processing causes indentation, called jaggies, on the boundary between the background and the foreground. Jaggies do not occur inside the transformed image because linear interpolation is adopted, but they do occur inherently on the boundary between the background and the transformed image. This deteriorates image quality and must be avoided. There are two well-known ways to improve image quality: blurring and supersampling. The former has little effect, and the latter has a much higher computing cost. To settle this problem, a method is proposed that searches for positions where jaggies may arise and smooths those points. Computer simulations based on real data from a VTR (one scene of a movie) are presented to demonstrate the proposed scheme using the DDA and CMs and to confirm its effectiveness on various transformations.
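The idea of searching for jaggy-prone positions and smoothing only those points can be sketched as follows: find foreground pixels that touch the background and replace just those with a local average, leaving the interior untouched. This is an illustrative software sketch, not the DDA/CM hardware method; the 3x3 box blur and the toy square mask are assumptions.

```python
import numpy as np

def smooth_jaggies(img, mask):
    """Blur only foreground pixels that touch the background, leaving the
    interior of the transformed image untouched."""
    up    = np.roll(mask,  1, axis=0)
    down  = np.roll(mask, -1, axis=0)
    left  = np.roll(mask,  1, axis=1)
    right = np.roll(mask, -1, axis=1)
    # boundary = foreground pixels with at least one background neighbour
    boundary = mask & ~(up & down & left & right)
    out = img.astype(float).copy()
    # 3x3 box blur computed everywhere, applied only at boundary positions
    acc = sum(np.roll(np.roll(out, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    out[boundary] = acc[boundary]
    return out

mask = np.zeros((9, 9), dtype=bool); mask[3:6, 3:6] = True   # toy foreground square
img = np.where(mask, 1.0, 0.0)
smoothed = smooth_jaggies(img, mask)
```

Unlike global blurring, interior pixels keep their exact values, so only the staircase edge is softened.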
Zhang, Yu Shrike; Chang, Jae-Byum; Alvarez, Mario Moisés; Trujillo-de Santiago, Grissel; Aleman, Julio; Batzaya, Byambaa; Krishnadoss, Vaishali; Ramanujam, Aishwarya Aravamudhan; Kazemzadeh-Narbat, Mehdi; Chen, Fei; Tillberg, Paul W; Dokmeci, Mehmet Remzi; Boyden, Edward S; Khademhosseini, Ali
2016-03-15
To date, much effort has been expended on making high-performance microscopes through better instrumentation. Recently, it was discovered that physical magnification of specimens was possible, through a technique called expansion microscopy (ExM), raising the question of whether physical magnification, coupled to inexpensive optics, could together match the performance of high-end optical equipment at a tiny fraction of the price. Here we show that such "hybrid microscopy" methods--combining physical and optical magnifications--can indeed achieve high performance at low cost. By physically magnifying objects, then imaging them on cheap miniature fluorescence microscopes ("mini-microscopes"), it is possible to image at a resolution comparable to that previously attainable only with benchtop microscopes that cost orders of magnitude more. We believe that this unprecedented hybrid technology, combining expansion microscopy, based on physical magnification, with mini-microscopy, relying on conventional optics--a process we refer to as Expansion Mini-Microscopy (ExMM)--is a highly promising alternative method for performing cost-effective, high-resolution imaging of biological samples. With further advancement of the technology, we believe that ExMM will find widespread application in high-resolution imaging, particularly in research and healthcare scenarios in developing countries or remote places.
Theory and Application of Image Enhancement
1994-02-01
[Abstract unavailable: the record preserves only a garbled fragment of a BASIC listing for collecting RGB data and displaying an image (a 'box' drawing subroutine and an image-filename prompt).]
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images in an actual Hadoop service system.
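The MapReduce pyramid-building step can be sketched in miniature: a map phase downsamples each image tile independently (as a compute node would) and a reduce phase stitches the results into the next pyramid level. This is a single-process illustration of the data flow only, not the Hadoop architecture itself; the tile size and helper names are assumptions.

```python
import numpy as np

TILE = 4  # toy tile edge length

def split_tiles(img):
    """Cut the image into (key, tile) records for the map phase."""
    h, w = img.shape
    return [((r, c), img[r*TILE:(r+1)*TILE, c*TILE:(c+1)*TILE])
            for r in range(h // TILE) for c in range(w // TILE)]

def map_downsample(record):
    """Map phase: one 'node' halves one tile (2x2 mean) for the next level."""
    key, tile = record
    half = tile.reshape(TILE // 2, 2, TILE // 2, 2).mean(axis=(1, 3))
    return key, half

def reduce_mosaic(records, shape):
    """Reduce phase: stitch the downsampled tiles into the level-1 image."""
    out = np.zeros(shape)
    s = TILE // 2
    for (r, c), tile in records:
        out[r*s:(r+1)*s, c*s:(c+1)*s] = tile
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
level1 = reduce_mosaic(map(map_downsample, split_tiles(img)), (4, 4))
```

Because each tile is processed independently, the map phase parallelizes trivially across nodes, which is exactly what the Hadoop deployment exploits at scale.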
NASA Astrophysics Data System (ADS)
Wiebe, S.; Rhoades, G.; Wei, Z.; Rosenberg, A.; Belev, G.; Chapman, D.
2013-05-01
Refraction x-ray contrast is an imaging modality used primarily in a research setting at synchrotron facilities that have a biomedical imaging research program. The most common method for exploiting refraction contrast is a technique called Diffraction Enhanced Imaging (DEI). The DEI apparatus allows the detection of refraction between two materials and produces a unique "edge enhanced" contrast appearance, very different from the traditional absorption x-ray imaging used in clinical radiology. In this paper we aim to explain the features of x-ray refraction contrast in terms a typical clinical radiologist would understand. We then discuss what needs to be considered in the interpretation of the refraction image. Finally we discuss the limitations of planar refraction imaging and the potential of DEI computed tomography. This is an original work that has not been submitted to any other source for publication. The authors have no commercial interests or conflicts of interest to disclose.
Design and evaluation of web-based image transmission and display with different protocols
NASA Astrophysics Data System (ADS)
Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo
2011-03-01
There are many Web-based image accessing technologies used in the medical imaging area, such as component-based (ActiveX Control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Applications (RIA), or HTML5-based Web display. Different Web display methods have different performance in different network environments. In this presentation, we give an evaluation of two developed Web-based image display systems. The first one is used for thin-client Web display. It works between a PACS Web server with a WADO interface and a thin client. The PACS Web server provides JPEG format images to HTML pages. The second one is for thick-client Web display. It works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program or HTML5 scripts. The PACS Web server provides native DICOM format images or a JPIP stream for these clients.
A survey of medical image registration - under review.
Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W
2016-10-01
A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications would be due to do justice to advances in the field. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago are even more urgent nowadays: validation of registration methods, and translation of the results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but is still in need of further development in various aspects. Copyright © 2016 Elsevier B.V. All rights reserved.
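The intensity-based registration that the survey identifies as now dominant can be illustrated in its simplest form: an exhaustive search over integer translations minimising the sum of squared differences. This is a deliberately minimal sketch (real registration methods use richer transforms, similarity metrics and optimisers); the function name and toy images are assumptions.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=3):
    """Exhaustive intensity-based registration: find the integer
    translation minimising the sum of squared differences (SSD)."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = float(np.sum((fixed - shifted) ** 2))
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

fixed = np.zeros((16, 16)); fixed[6:10, 6:10] = 1.0       # a bright square
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)   # shifted copy
dy, dx = register_translation(fixed, moving)              # recovers (-2, 1)
```

No landmarks or fiducials are needed: the similarity is computed from the intensities alone, which is what distinguishes intensity-based from extrinsic registration.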
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately, in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which perfectly determines the convolution masks in a noise-free environment if a channel disparity condition, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate the capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
Online prediction of organoleptic data for snack food using color images
NASA Astrophysics Data System (ADS)
Yu, Honglu; MacGregor, John F.
2004-11-01
In this paper, a study of the real-time prediction of organoleptic properties of snack food from RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive. By taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four types of organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) is used to regress the extracted feature variables against the four response variables.
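The feature-extraction step can be sketched with a one-level 2-D Haar transform, taking the mean absolute value of each detail subband as a texture feature. For brevity this sketch fits the features with ordinary least squares rather than PLS (a stated simplification); the striped toy images and the linear "property" they encode are assumptions.

```python
import numpy as np

def haar_features(img):
    """One-level 2-D Haar transform; the mean absolute value of each
    detail subband (LH, HL, HH) is used as a texture feature."""
    h, w = img.shape
    a = img.reshape(h // 2, 2, w // 2, 2)
    p, q = a[:, 0, :, 0], a[:, 0, :, 1]   # top-left, top-right of each 2x2 block
    r, s = a[:, 1, :, 0], a[:, 1, :, 1]   # bottom-left, bottom-right
    lh = (p + q - r - s) / 4              # horizontal-edge detail
    hl = (p - q + r - s) / 4              # vertical-edge detail
    hh = (p - q - r + s) / 4              # diagonal detail
    return np.array([np.abs(band).mean() for band in (lh, hl, hh)])

# toy snack images: horizontal stripes whose contrast c tracks a property y = 2c
X, y = [], []
for c in np.linspace(0.1, 1.0, 10):
    img = np.zeros((8, 8)); img[::2, :] = c
    X.append(haar_features(img)); y.append(2.0 * c)
X, y = np.array(X), np.array(y)

# ordinary least squares stands in for PLS in this sketch
D = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef
```

In practice PLS would be preferred precisely because wavelet features from neighbouring subbands are highly correlated; least squares is used here only to keep the sketch dependency-free.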
Significant wave heights from Sentinel-1 SAR: Validation and applications
NASA Astrophysics Data System (ADS)
Stopa, J. E.; Mouche, A.
2017-03-01
Two empirical algorithms are developed for wave mode images measured from the synthetic aperture radar aboard Sentinel-1A. The first method, called CWAVE_S1A, is an extension of previous efforts developed for ERS2, and the second method, called Fnn, uses the azimuth cutoff among other parameters to estimate significant wave heights (Hs) and average wave periods without using a modulation transfer function. Neural networks are trained using colocated data generated from WAVEWATCH III and independently verified with data from altimeters and in situ buoys. We use neural networks to capture the nonlinear relationships between the input SAR image parameters and output geophysical wave parameters. CWAVE_S1A and Fnn both perform well, with Hs root mean square errors within 0.5 and 0.6 m, respectively. The developed neural networks extend the SAR's ability to retrieve useful wave information under a large range of environmental conditions including extratropical and tropical cyclones in which Hs estimation is traditionally challenging.
NASA Astrophysics Data System (ADS)
Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer
2007-02-01
Three dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass; evidently precluding use of cells in suspension. Aiming to overcome this limitation we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield, and confocal microscope imaging we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single-cells, cell aggregates, and embryos). Further, we show how visual screening by micro-rotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single cell cloning (with high-viability). Our methodology combining high-content 3D visual screening with one-step single cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis on haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.
Markov random field based automatic image alignment for electron tomography.
Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark
2008-03-01
We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.
NASA Astrophysics Data System (ADS)
Osipov, Gennady
2013-04-01
We propose a solution to the problem of exploration of various mineral resource deposits, determination of their forms / classification of types (oil, gas, minerals, gold, etc.) with the help of satellite photography of the region of interest. Images received from the satellite are processed and analyzed to reveal the presence of specific signs of deposits of various minerals. The course of data processing and forecasting can be divided into several stages: Pre-processing of images. Normalization of color and luminosity characteristics, determination of the necessary contrast level and integration of a great number of separate photos into a single map of the region are performed. Construction of a semantic map image. Recognition of the bitmapped image and allocation of objects and primitives known to the system are realized. Intelligent analysis. At this stage the acquired information is analyzed with the help of a knowledge base, which contains so-called "attention landscapes" of experts. Methods used for recognition and identification of images: a) combined method of image recognition, b) semantic analysis of posterized images, c) reconstruction of three-dimensional objects from bitmapped images, d) cognitive technology of processing and interpretation of images. This stage is fundamentally new and it distinguishes the suggested technology from all others. Automatic registration of the allocation of experts' attention - registration of the so-called "attention landscape" of experts - is the base of the technology. Landscapes of attention are, essentially, highly effective filters that cut off unnecessary information and emphasize exactly the factors used by an expert for making a decision. The technology based on the denoted principles involves the following stages, which are implemented in corresponding program agents. Training mode -> Creation of a base of ophthalmologic images (OI) -> Processing and making generalized OI (GOI) -> Mode of recognition and interpretation of unknown images.
The training mode includes noncontact registration of eye motion, reconstruction of the "attention landscape" fixed by the expert, recording the comments of the expert who is a specialist in the field of image interpretation, and transfer of this information into the knowledge base. Creation of the base of ophthalmologic images (OI) includes making semantic contacts from a great number of OI, based on analysis of the OI and the expert's comments. Processing of OI and making generalized OI (GOI) is realized by inductive logic algorithms and consists in the synthesis of structural invariants of the OI. The mode of recognition and interpretation of unknown images consists of several stages, which include: comparison of an unknown image with the base of structural invariants of OI; revealing of structural invariants in unknown images; synthesis of an interpretive message from the structural invariants base and the OI base (the experts' comments stored in it). We want to emphasize that the training mode does not assume special involvement of experts to teach the system - it is realized in the process of regular experts' work on image interpretation and becomes possible after installation of a special apparatus for noncontact registration of experts' attention. Consequently, the technology whose principles are described here provides a fundamentally new, effective solution to the problem of exploration of mineral resource deposits based on computer analysis of aerial and satellite image data.
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Rudinskiy, Nikita; Paciscopi, Marco; Müllenbroich, Marie Caroline; Costantini, Irene; Sacconi, Leonardo; Frasconi, Paolo; Hyman, Bradley T.; Pavone, Francesco S.
2016-03-01
Mapping neuronal activity patterns across the whole brain with cellular resolution is a challenging task for state-of-the-art imaging methods. Indeed, despite a number of technological efforts, quantitative cellular-resolution activation maps of the whole brain have not yet been obtained. Many techniques are limited by coarse resolution or by a narrow field of view. High-throughput imaging methods, such as light sheet microscopy, can be used to image large specimens with high resolution and in reasonable times. However, the bottleneck is then moved from image acquisition to image analysis, since many terabytes of data have to be processed to extract meaningful information. Here, we present a full experimental pipeline to quantify neuronal activity in the entire mouse brain with cellular resolution, based on a combination of genetics, optics and computer science. We used a transgenic mouse strain (Arc-dVenus mouse) in which neurons that have been active in the last hours before brain fixation are fluorescently labelled. Samples were cleared with CLARITY and imaged with a custom-made confocal light sheet microscope. To perform automatic localization of fluorescent cells in the large images produced, we used a novel computational approach called semantic deconvolution. The combined approach presented here allows quantifying the number of Arc-expressing neurons throughout the whole mouse brain. When applied to cohorts of mice subject to different stimuli and/or environmental conditions, this method helps find correlations in activity between different neuronal populations, opening the possibility of inferring a sort of brain-wide 'functional connectivity' with cellular resolution.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique based on Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
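The frequency-domain deconvolution idea behind FDD-style methods can be sketched as Fourier division of the tissue concentration-time curve by the arterial input function, with ill-conditioned frequency bins suppressed. The threshold filter below is a crude stand-in for the paper's analytical filters, and the synthetic curves are assumptions for illustration.

```python
import numpy as np

def fourier_deconvolve(ctc, aif, thresh=0.1):
    """Recover the residue function R from ctc = aif (*) R by Fourier
    division; bins where |FFT(aif)| is small are zeroed out, a crude
    stand-in for the analytical filters of FDD/AFF/ASSF."""
    n = len(ctc)
    Fa, Fc = np.fft.fft(aif, n), np.fft.fft(ctc, n)
    keep = np.abs(Fa) >= thresh * np.abs(Fa).max()   # drop ill-conditioned bins
    R = np.zeros(n, dtype=complex)
    R[keep] = Fc[keep] / Fa[keep]
    return np.real(np.fft.ifft(R))

t = np.arange(32, dtype=float)
aif = t * np.exp(-t / 3.0)            # gamma-variate-like arterial input
r_true = np.exp(-t / 5.0)             # exponential residue function
ctc = np.real(np.fft.ifft(np.fft.fft(aif) * np.fft.fft(r_true)))  # circular convolution
r_est = fourier_deconvolve(ctc, aif, thresh=0.0)  # noise-free: no filtering needed
```

The attraction of the frequency-domain route is that the entire deconvolution is a handful of FFTs and an element-wise division, which is why it outpaces SVD-based approaches; with noisy data the filter threshold trades noise amplification against bias.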
Abadi, Shima H; Tolstoy, Maya; Wilcock, William S D
2017-01-01
In order to mitigate against possible impacts of seismic surveys on baleen whales it is important to know as much as possible about the presence of whales within the vicinity of seismic operations. This study expands on previous work that analyzes single seismic streamer data to locate nearby calling baleen whales with a grid search method that utilizes the propagation angles and relative arrival times of received signals along the streamer. Three dimensional seismic reflection surveys use multiple towed hydrophone arrays for imaging the structure beneath the seafloor, providing an opportunity to significantly improve the uncertainty associated with streamer-generated call locations. All seismic surveys utilizing airguns conduct visual marine mammal monitoring surveys concurrent with the experiment, with powering-down of seismic source if a marine mammal is observed within the exposure zone. This study utilizes data from power-down periods of a seismic experiment conducted with two 8-km long seismic hydrophone arrays by the R/V Marcus G. Langseth near Alaska in summer 2011. Simulated and experiment data demonstrate that a single streamer can be utilized to resolve left-right ambiguity because the streamer is rarely perfectly straight in a field setting, but dual streamers provides significantly improved locations. Both methods represent a dramatic improvement over the existing Passive Acoustic Monitoring (PAM) system for detecting low frequency baleen whale calls, with ~60 calls detected utilizing the seismic streamers, zero of which were detected using the current R/V Langseth PAM system. Furthermore, this method has the potential to be utilized not only for improving mitigation processes, but also for studying baleen whale behavior within the vicinity of seismic operations.
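The grid-search localization described above can be sketched by comparing measured relative arrival times against those predicted for each candidate grid point. This toy version uses a single straight "streamer", a fixed sound speed and a one-sided grid (sidestepping the left-right ambiguity discussed in the text); all geometry, names and values are assumptions.

```python
import numpy as np

C = 1500.0  # assumed nominal sound speed in seawater, m/s

def locate(receivers, rel_times, grid):
    """Grid search: choose the candidate point whose predicted relative
    arrival times (w.r.t. the first hydrophone) best fit the data."""
    best, best_err = None, np.inf
    for pt in grid:
        d = np.linalg.norm(receivers - pt, axis=1)
        pred = (d - d[0]) / C                 # relative travel times
        err = np.sum((pred - rel_times) ** 2)
        if err < best_err:
            best, best_err = pt, err
    return best

# hydrophones every 100 m along a straight toy "streamer" on the x axis
receivers = np.array([[x, 0.0] for x in range(0, 1000, 100)])
src = np.array([400.0, 700.0])                       # true caller position
d_true = np.linalg.norm(receivers - src, axis=1)
rel_times = (d_true - d_true[0]) / C
grid = np.array([[x, y] for x in range(0, 1001, 100)
                        for y in range(100, 1001, 100)], dtype=float)
located = locate(receivers, rel_times, grid)
```

With two streamers, the same misfit would simply sum over both arrays, which is what shrinks the location uncertainty and removes the mirror solution.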
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1993-01-01
The Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, has developed a prototype interactive software system called the Spectral Image Processing System (SIPS) using IDL (the Interactive Data Language) on UNIX-based workstations. SIPS is designed to take advantage of the combination of high spectral resolution and spatial data presentation unique to imaging spectrometers. It streamlines analysis of these data by allowing scientists to rapidly interact with entire datasets. SIPS provides visualization tools for rapid exploratory analysis and numerical tools for quantitative modeling. The user interface is X-Windows-based, user friendly, and provides 'point and click' operation. SIPS is being used for multidisciplinary research concentrating on use of physically based analysis methods to enhance scientific results from imaging spectrometer data. The objective of this continuing effort is to develop operational techniques for quantitative analysis of imaging spectrometer data and to make them available to the scientific community prior to the launch of imaging spectrometer satellite systems such as the Earth Observing System (EOS) High Resolution Imaging Spectrometer (HIRIS).
Identification of Age-Related Macular Degeneration Using OCT Images
NASA Astrophysics Data System (ADS)
Arabi, Punal M., Dr; Krishna, Nanditha; Ashwini, V.; Prathibha, H. M.
2018-02-01
Age-related Macular Degeneration (AMD) has become one of the leading retinal diseases in recent years. Macular degeneration occurs when the central portion of the retina, called the macula, deteriorates. Because the deterioration occurs with age, it is commonly referred to as Age-related Macular Degeneration. The disease can be visualized by several imaging modalities, such as fundus imaging and Optical Coherence Tomography (OCT), among others. OCT is the most widely used technique for screening for AMD because of its ability to detect very minute changes in the retina. Healthy and AMD-affected OCT images are classified by extracting the Retinal Pigmented Epithelium (RPE) layer of the images using image processing techniques. The extracted layer is sampled, the number of white pixels in each sample is counted, and the mean pixel count is calculated. The average mean value is calculated for both the healthy and the AMD-affected images, a threshold value is fixed, and a decision rule is framed to classify the images of interest. The proposed method showed an accuracy of 75%.
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation (TV) regularization has proven to be a popular and effective model for image restoration because of its edge-preserving ability. However, because TV favors a piecewise-constant solution, flat regions of the processed image easily exhibit "staircase" artifacts, and the amplitude of edges is underestimated; the underlying cause of the problem is that the regularization parameter cannot adapt to the spatially local information of the image. In this paper, we propose a novel scatter-matrix-eigenvalue-based TV (SMETV) regularization with a blind image restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into the regularization by using an edge indicator, called the difference eigenvalue, to distinguish edges from flat areas. The proposed algorithm can effectively reduce noise in flat regions as well as preserve edge and detail information. Moreover, it is more robust to changes in the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
Rock classification based on resistivity patterns in electrical borehole wall images
NASA Astrophysics Data System (ADS)
Linek, Margarete; Jungmann, Matthias; Berlage, Thomas; Pechnig, Renate; Clauser, Christoph
2007-06-01
Electrical borehole wall images represent grey-level-coded micro-resistivity measurements at the borehole wall. Different scientific methods have been implemented to transform image data into quantitative log curves. We introduce a pattern recognition technique applying texture analysis, which uses second-order statistics based on the occurrence of pixel pairs. We calculate so-called Haralick texture features such as contrast, energy, entropy and homogeneity. A supervised classification method is used for assigning characteristic texture features to different rock classes and assessing the discriminative power of these image features. We use classifiers obtained from training intervals to characterize the entire image data set recovered in ODP hole 1203A. This yields a synthetic lithology profile based on computed texture data. We show that Haralick features accurately classify 89.9% of the training intervals. Misclassifications occurred for vesicular basaltic rocks; hence, further image analysis tools are used to improve classification reliability. We decompose the 2D image signal by wavelet transformation in order to enhance image objects horizontally, diagonally and vertically. The resulting filtered images are used for further texture analysis. This combined classification, based on Haralick features and wavelet transformation, improved our classification up to a level of 98%. The application of wavelet transformation increases the consistency between standard logging profiles and texture-derived lithology. Texture analysis of borehole wall images offers the potential to facilitate objective analysis of multiple boreholes with the same lithology.
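The co-occurrence statistics behind these features are straightforward to compute. The sketch below is a minimal NumPy illustration, not the study's code; the quantization level (8 grey levels) and the single horizontal pixel-pair offset are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel-pair offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    P += P.T                      # make symmetric
    return P / P.sum()            # normalize to joint probabilities

def haralick(P):
    """Contrast, energy, entropy and homogeneity from a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    np.sum(P * (i - j) ** 2),
        "energy":      np.sum(P ** 2),
        "entropy":     -np.sum(P[P > 0] * np.log(P[P > 0])),
        "homogeneity": np.sum(P / (1 + np.abs(i - j))),
    }

rng = np.random.default_rng(0)
smooth = np.tile(np.arange(64), (64, 1))   # banded, homogeneous texture
noisy = rng.integers(0, 64, (64, 64))      # rough texture
f_s, f_n = haralick(glcm(smooth)), haralick(glcm(noisy))
```

Rough textures yield higher contrast and lower homogeneity than banded ones, which is the kind of separation a supervised texture classifier exploits.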
Pattern recognition neural-net by spatial mapping of biology visual field
NASA Astrophysics Data System (ADS)
Lin, Xin; Mori, Masahiko
2000-05-01
The method of spatial mapping in the biological visual field is applied to artificial neural networks for pattern recognition. Through a coordinate transform called complex-logarithm mapping, followed by a Fourier transform, input images are converted into scale-, rotation- and shift-invariant patterns, and then fed into a multilayer neural network for learning and recognition. The results of a computer simulation and an optical experimental system are described.
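The invariance idea can be sketched numerically (nearest-neighbour resampling assumed; the paper's optical implementation is not reproduced here): the complex-logarithm map turns rotation and scaling about the image centre into translations, and the magnitude of a subsequent Fourier transform is insensitive to those translations.

```python
import numpy as np

def complex_log_map(img, n_rho=64, n_theta=64):
    """Resample an image on an (exp(rho), theta) grid centred on the image.
    Rotation and scaling of the input become translations of the output."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rho = np.linspace(0, np.log(min(cy, cx)), n_rho)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(rho)[:, None]
    y = np.clip(np.round(cy + r * np.sin(theta)), 0, h - 1).astype(int)
    x = np.clip(np.round(cx + r * np.cos(theta)), 0, w - 1).astype(int)
    return img[y, x]

def invariant_signature(img):
    """|FFT| of the log-polar image: rotation becomes a circular shift in
    theta, which the Fourier magnitude ignores; scale shifts rho, which is
    only approximately handled because the rho axis is not periodic."""
    return np.abs(np.fft.fft2(complex_log_map(img)))
```

Sampling noise and edge effects make the invariance approximate, which is why a learning stage follows in the paper's pipeline.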
Using a Video Camera to Measure the Radius of the Earth
ERIC Educational Resources Information Center
Carroll, Joshua; Hughes, Stephen
2013-01-01
A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
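The underlying geometry can be sketched as follows (illustrative numbers, not the article's measurements; latitude and solar declination are ignored): a point at height h on the wall stays sunlit for a time Δt after the shadow passes its base, with cos(ωΔt) = R/(R + h), which for small angles gives R ≈ 2h/(ωΔt)².

```python
import math

# Earth's sidereal rotation rate in rad/s
OMEGA = 2 * math.pi / 86164

def earth_radius(h_m, dt_s):
    """Small-angle estimate R ≈ 2h / (omega*dt)^2, ignoring latitude and
    solar declination. h_m: shadow rise height (m); dt_s: rise time (s)."""
    return 2 * h_m / (OMEGA * dt_s) ** 2

# e.g. a shadow taking ~54 s to climb 50 m implies R on the order of 6.4e6 m
R = earth_radius(50.0, 54.0)
```

The hypothetical numbers above are chosen only to show that the formula lands near the accepted value of about 6.37 × 10⁶ m.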
Spectral Regression Discriminant Analysis for Hyperspectral Image Classification
NASA Astrophysics Data System (ADS)
Pan, Y.; Wu, J.; Huang, H.; Liu, J.
2012-08-01
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into the algorithm, making it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.
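For the two-class case the regression framework collapses to a ridge regression against a signed class indicator; the sketch below (synthetic data and an ad-hoc regularizer, not the authors' multi-class formulation) shows how a discriminant projection is obtained without any dense eigen-decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (100, 50))   # class 0 "pixel spectra", 50 bands
X1 = rng.normal(0.8, 1.0, (100, 50))   # class 1, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Spectral-regression step: for two classes the graph embedding is just a
# signed class indicator, and the projection w is found by ridge regression
# rather than by eigen-decomposition of a dense scatter matrix.
t = np.where(y == 0, -1.0, 1.0)
Xc = X - X.mean(axis=0)
alpha = 1.0                             # regularizer (ad hoc choice)
w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(50), Xc.T @ t)

proj = Xc @ w                           # 1-D discriminant feature
acc = np.mean((proj > 0) == (y == 1))   # threshold at zero
```

Swapping the ridge penalty for another regularizer is exactly the kind of flexibility the regression formulation buys.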
Reflection imaging of the Moon's interior using deep-moonquake seismic interferometry
NASA Astrophysics Data System (ADS)
Nishitsuji, Yohei; Rowe, C. A.; Wapenaar, Kees; Draganov, Deyan
2016-04-01
The internal structure of the Moon has been investigated over many years using a variety of seismic methods, such as travel time analysis, receiver functions, and tomography. Here we propose to apply body-wave seismic interferometry to deep moonquakes in order to retrieve zero-offset reflection responses (and thus images) beneath the Apollo stations on the nearside of the Moon from virtual sources colocated with the stations. This method is called deep-moonquake seismic interferometry (DMSI). Our results show a laterally coherent acoustic boundary around 50 km depth beneath all four Apollo stations. We interpret this boundary as the lunar seismic Moho. This depth agrees with the Japan Aerospace Exploration Agency's (JAXA) SELenological and Engineering Explorer (SELENE) result and previous travel time analysis at the Apollo 12/14 sites. The deeper part of the image we obtain from DMSI shows laterally incoherent structures. We interpret this lateral inhomogeneity as a zone characterized by strong scattering and constant apparent seismic velocity at our resolution scale (0.2-2.0 Hz).
Positive-negative corresponding normalized ghost imaging based on an adaptive threshold
NASA Astrophysics Data System (ADS)
Li, G. L.; Zhao, Y.; Yang, Z. H.; Liu, X.
2016-11-01
Ghost imaging (GI) has attracted increasing attention as a new imaging technique in recent years. However, the signal-to-noise ratio (SNR) of GI with pseudo-thermal light needs to be improved before it can meet engineering application demands. We therefore propose a new scheme, called positive-negative correspondence normalized GI based on an adaptive threshold (PCNGI-AT), to achieve good performance with a smaller amount of data. The scheme exploits the advantages of both normalized GI (NGI) and positive-negative correspondence GI (P-NCGI). After proving the correctness and feasibility of the scheme in theory, we designed an adaptive threshold selection method in which the parameter of the object-signal selection condition is replaced by the normalized value. Simulation and experimental results reveal that the SNR of the proposed scheme is better than that of time-correspondence differential GI (TCDGI), while avoiding the computation of the correlation matrix and reducing the amount of data used. The proposed method will make GI far more practical in engineering applications.
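A toy simulation of the ingredients involved (pure NumPy, uniform random rather than genuine pseudo-thermal speckle, and a fixed mean threshold in place of the paper's adaptive one, so this is only a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 32, 4000
obj = np.zeros((N, N))
obj[8:24, 12:20] = 1.0                              # toy transmissive object

patterns = rng.random((M, N, N))                    # speckle-like illumination
bucket = (patterns * obj).sum(axis=(1, 2))          # single-pixel (bucket) values

# conventional correlation GI:  G(x) = <B I(x)> - <B><I(x)>
G = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)

# positive-negative correspondence: average only the frames whose bucket
# value lies above / below the mean, then subtract the two partial images
G_pn = (patterns[bucket > bucket.mean()].mean(0)
        - patterns[bucket < bucket.mean()].mean(0))
```

Both reconstructions correlate with the object; the correspondence variant discards no measurements here, whereas the published scheme thresholds adaptively to use fewer frames.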
Land cover classification of Landsat 8 satellite data based on Fuzzy Logic approach
NASA Astrophysics Data System (ADS)
Taufik, Afirah; Sakinah Syed Ahmad, Sharifah
2016-06-01
The aim of this paper is to propose a method for classifying the land cover of a satellite image using a fuzzy rule-based system. The study uses Landsat 8 bands together with derived indices, namely the Normalized Difference Water Index (NDWI), the Normalized Difference Built-up Index (NDBI) and the Normalized Difference Vegetation Index (NDVI), as inputs to the fuzzy inference system. The three selected indices represent our three main classes, called water, built-up land, and vegetation. The combination of the original multispectral bands and the selected indices provides more information about the image. The fuzzy membership parameters are selected using a supervised method known as ANFIS (adaptive neuro-fuzzy inference system) training. The fuzzy system is tested on the classification of a land cover image covering the Klang Valley area. The results show that the fuzzy system approach is effective and can be explored and implemented for other areas of Landsat data.
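The three indices feeding the fuzzy system are all normalized band differences; a minimal sketch (the Landsat 8 band-to-variable mapping in the comments is an assumption, and the fuzzy rules themselves are not reproduced):

```python
import numpy as np

def normalized_diff(a, b):
    """Generic normalized-difference index, e.g. NDVI = (NIR - Red)/(NIR + Red)."""
    a, b = a.astype(float), b.astype(float)
    return (a - b) / (a + b + 1e-12)    # small epsilon avoids division by zero

def indices(green, red, nir, swir1):
    """Index inputs for the classifier (assumed Landsat 8 mapping:
    green=B3, red=B4, nir=B5, swir1=B6)."""
    return {
        "NDVI": normalized_diff(nir, red),     # high over vegetation
        "NDWI": normalized_diff(green, nir),   # high over water
        "NDBI": normalized_diff(swir1, nir),   # high over built-up land
    }
```

In the paper these index values are fuzzified and combined by ANFIS-trained membership functions rather than compared crisply.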
NASA Astrophysics Data System (ADS)
Datteri, Ryan; Asman, Andrew J.; Landman, Bennett A.; Dawant, Benoit M.
2014-03-01
Multi-atlas registration-based segmentation is a popular technique in the medical imaging community, used to transform anatomical and functional information from a set of atlases onto a new patient that lacks this information. The accuracy of the projected information on the target image is dependent on the quality of the registrations between the atlas images and the target image. Recently, we have developed a technique called AQUIRC that aims at estimating the error of a non-rigid registration at the local level and was shown to correlate to error in a simulated case. Herein, we extend upon this work by applying AQUIRC to atlas selection at the local level across multiple structures in cases in which non-rigid registration is difficult. AQUIRC is applied to 6 structures, the brainstem, optic chiasm, left and right optic nerves, and the left and right eyes. We compare the results of AQUIRC to that of popular techniques, including Majority Vote, STAPLE, Non-Local STAPLE, and Locally-Weighted Vote. We show that AQUIRC can be used as a method to combine multiple segmentations and increase the accuracy of the projected information on a target image, and is comparable to cutting edge methods in the multi-atlas segmentation field.
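The fusion step that these methods share is simple to sketch; below are majority voting and a locally-weighted variant, where the per-pixel weight maps could in principle come from a local registration-quality estimate such as AQUIRC's (the weighting here is generic, not the paper's):

```python
import numpy as np

def majority_vote(segs):
    """Pixelwise majority vote over a stack of propagated atlas labels."""
    segs = np.stack(segs)                     # (n_atlases, H, W)
    n_labels = int(segs.max()) + 1
    votes = np.stack([(segs == l).sum(0) for l in range(n_labels)])
    return votes.argmax(0)

def weighted_vote(segs, weights):
    """Locally-weighted vote: each atlas contributes a per-pixel weight map
    (e.g. derived from a local registration-error estimate)."""
    segs, weights = np.stack(segs), np.stack(weights)
    n_labels = int(segs.max()) + 1
    votes = np.stack([((segs == l) * weights).sum(0) for l in range(n_labels)])
    return votes.argmax(0)
```

STAPLE-style methods replace these counts with an iteratively estimated per-rater performance model, which is where the comparison in the paper comes in.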
NASA Astrophysics Data System (ADS)
Liba, Orly; Sorelle, Elliott D.; Sen, Debasish; de La Zerda, Adam
2016-03-01
Optical Coherence Tomography (OCT) enables real-time imaging of living tissues at cell-scale resolution over millimeters in three dimensions. Despite these advantages, functional biological studies with OCT have been limited by a lack of exogenous contrast agents that can be distinguished from tissue. Here we report an approach to functional OCT imaging that implements custom algorithms to spectrally identify unique contrast agents: large gold nanorods (LGNRs). LGNRs exhibit 110-fold greater spectral signal per particle than conventional GNRs, which enables detection of individual LGNRs in water and at concentrations as low as 250 pM in the circulation of living mice. This translates to ~40 particles per imaging voxel in vivo. Unlike previous implementations of OCT spectral detection, the methods described herein adaptively compensate for depth and processing artifacts on a per-sample basis. Collectively, these methods enable high-quality noninvasive contrast-enhanced OCT imaging in living subjects, including detection of tumor microvasculature at twice the depth achievable with conventional OCT. Additionally, multiplexed detection of spectrally distinct LGNRs was demonstrated to observe discrete patterns of lymphatic drainage and to identify individual lymphangions and lymphatic valve functional states. These capabilities provide a powerful platform, called MOZART, for noninvasive molecular imaging and characterization of tissue at cellular resolution.
From the RSNA refresher courses: Image-guided thermal therapy of uterine fibroids.
Tempany, Clare M
2007-01-01
One of the most recent additions to the methods for image-guided therapy is magnetic resonance (MR)-guided focused ultrasound. This method represents a unique closed-loop therapy, with planning, guidance, control, and direct feedback (called MR thermometry), which work together to ensure an effective therapy. The focused ultrasound induces focal tissue destruction by thermocoagulation in a noninvasive manner. MR also enables real-time thermometry to be performed during each and every sonication. These characteristics make MR-guided focused ultrasound an exciting new approach for treating fibroids. Fibroids are diagnosed based on findings from the patient's physical examination supplemented by imaging results. MR imaging is preferred to other imaging modalities because it enables the fibroids and the entire pelvis to be fully examined. After individual fibroids are identified and the target area is defined by the radiologist, the target volume is analyzed in a three-dimensional assessment to ensure the patient's safety. The procedure begins with the delivery of low-power sonication, and the power is gradually increased until the therapeutic dose is reached. After the procedure, postcontrast images are acquired; these should demonstrate tissue necrosis. The results of clinical trials have shown that the treatment is safe, effective, and highly acceptable to patients. RSNA, 2007
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapses.
NASA Astrophysics Data System (ADS)
Hayes, Brian
1994-12-01
Gleaning further clues to the structure of the universe will require larger data samples. To that end, a major new survey of the skies, called the Sloan Digital Sky Survey (SDSS), is in preparation. It will catalog some 50 million galaxies and about 70 million stars. A new 2.5-meter telescope to be erected at Apache Point Observatory in New Mexico will be dedicated to the survey. The telescope, however, is not the key innovation that will make the survey possible. The crucial factor is the technology for digitally recording large numbers of images and spectra and for automating the analysis, recognition, and classification of those images and spectra. The methods to be used are discussed.
A new approach of watermarking technique by means multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on is a legal problem for which no solution has been found. It is therefore crucial to create and develop methods and numerical algorithms to address copyright problems. In this paper we focus on a new watermarking technique applied to digital color images. Our aim is to describe the implemented watermarking algorithm, based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report a large set of experiments and some important numerical results demonstrating the robustness of the proposed algorithm to geometrical attacks.
Endoscopic ultrasound: Elastographic lymph node evaluation.
Dietrich, Christoph F; Jenssen, Christian; Arcidiacono, Paolo G; Cui, Xin-Wu; Giovannini, Marc; Hocke, Michael; Iglesias-Garcia, Julio; Saftoiu, Adrian; Sun, Siyu; Chiorean, Liliana
2015-01-01
Different imaging techniques provide different kinds of information that contribute to the final diagnosis and further management of patients. Since the time of Hippocrates, palpation has been used to detect and characterize body masses. So-called virtual palpation has now become a reality thanks to elastography, a recently developed technique. Elastography has already proven its added value as a complementary imaging method, helping to better characterize and differentiate between benign and malignant masses. The current applications of elastography in lymph node (LN) assessment by endoscopic ultrasonography are discussed in this paper, with a review of the literature and future perspectives.
Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley
Purpose: 4D CT imaging in mice is important in a variety of areas, including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting, and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and adapt several successful methods developed for humans to an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods, compared to a manual method considered to be ground truth, were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method.
The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods while the Fourier transform and center of mass methods showed a noticeable dependence on the ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.
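Of the four approaches, the pixel-intensity method is the easiest to sketch: breathing modulates the mean intensity of a region of interest in each projection, and the resulting trace can be peak-detected and phase-binned. This is a simplified illustration, not the paper's implementation:

```python
import numpy as np

def intensity_signal(projections, roi):
    """Mean pixel intensity inside an ROI for every cone-beam projection;
    respiratory motion through the ROI modulates this trace.
    `roi` is a (y0, y1, x0, x1) box."""
    y0, y1, x0, x1 = roi
    return projections[:, y0:y1, x0:x1].mean(axis=(1, 2))

def phase_bins(signal, n_bins=4):
    """Assign each projection a phase bin between consecutive peaks
    (naive local-maximum peak detection)."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
    bins = np.zeros(len(signal), dtype=int)
    for a, b in zip(peaks[:-1], peaks[1:]):
        for i in range(a, b):
            bins[i] = int((i - a) / (b - a) * n_bins)
    return bins
```

The sharp inspiration spikes of anesthetized mice are what forced the paper's modification of the Amsterdam shroud method; a robust peak detector would need the same care here.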
Hashimoto, Shinichi; Ogihara, Hiroyuki; Suenaga, Masato; Fujita, Yusuke; Terai, Shuji; Hamamoto, Yoshihiko; Sakaida, Isao
2017-08-01
Visibility in capsule endoscopic images is presently evaluated through intermittent analysis of frames selected by a physician; the assessment is thus subjective and not quantitative. No method for automatically quantifying the visibility of capsule endoscopic images has been reported. Generally, when designing automated image recognition programs, physicians must provide training images; this process is called supervised learning. We aimed to develop a novel automated self-learning quantification system to identify visible areas in capsule endoscopic images. The technique was developed using 200 capsule endoscopic images retrospectively selected from each of three patients. The rate of detection of visible areas was compared between a supervised learning program, using training images labeled by a physician, and our novel automated self-learning program, using unlabeled training images without physician intervention. The rate of detection of visible areas was equivalent for the two programs, and the visible areas automatically identified by the self-learning program correlated with the areas identified by an experienced physician. We thus developed a novel self-learning automated program to identify visible areas in capsule endoscopic images.
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. Images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed to answer similarity queries efficiently by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the system works with features based on the color distribution of the images, through both normalized and metric histograms. Metric histograms are invariant to scale, translation and rotation of images, as well as to brightness transformations. The cbPACS is prepared to integrate new image features based on texture and on the shape of the main objects in the image.
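The feature side of such a system can be sketched in a few lines: a normalized histogram as the stored feature and an L1 distance inside a k-nearest-neighbor query (the metric-histogram variant and the SQL integration are beyond this sketch, and the bin count is an arbitrary choice):

```python
import numpy as np

def normalized_histogram(img, bins=16):
    """Global intensity-distribution feature, normalized so that images of
    different sizes remain comparable."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def l1_distance(h1, h2):
    """A simple histogram distance usable inside the database's
    distance-function extension."""
    return np.abs(h1 - h2).sum()

def knn_query(query_feat, db_feats, k=2):
    """k-nearest-neighbor similarity query over stored feature vectors."""
    d = [l1_distance(query_feat, f) for f in db_feats]
    return np.argsort(d)[:k]
```

In the real system these distances are evaluated through an indexing structure inside the relational manager rather than by a linear scan.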
Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.
Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C
2014-02-01
It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
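The computation is compact enough to sketch in NumPy (Prewitt gradients and pooling by standard deviation; note the published method also downsamples by a factor of 2 first and uses c = 170 for 0-255 images, so this is a simplified version):

```python
import numpy as np

def _grad_mag(img):
    """Gradient magnitude via 3x3 Prewitt kernels (edge-padded)."""
    img = img.astype(float)
    kx = np.array([[1., 0., -1.], [1., 0., -1.], [1., 0., -1.]]) / 3.0
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = sum(kx[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def gmsd(ref, dist, c=170.0):
    """Gradient magnitude similarity map, pooled by its standard deviation.
    0 means the images are identical; larger values mean more distortion."""
    g_r, g_d = _grad_mag(ref), _grad_mag(dist)
    gms = (2.0 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)
    return gms.std()
```

Pooling by standard deviation rather than the mean is the paper's key observation: uniform degradation leaves the GMS map flat, while structurally uneven degradation spreads it out.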
Garnier, A; Poncet, F; Billette De Villemeur, A; Exbrayat, C; Bon, M F; Chevalier, A; Salicru, B; Tournegros, J M
2009-06-01
The screening program guidelines specify that the rate at which women are called back for additional imaging (positive mammogram) should not exceed 7% at initial screening and 5% at subsequent screening. Materials and methods: Results in the Isère region (12%) prompted a review of the correlation between the call back rate and quality indicators (detection rate, sensitivity, specificity, positive predictive value) for the radiologists providing interpretations during that period. Three groups of radiologists were identified: the group with a call back rate of 10% achieved the best results (sensitivity: 92%; detection rate: 0.53%; specificity: 90%). The group with the lowest call back rate (7.7%) showed insufficient sensitivity (58%). The last group, with a call back rate of 18.3%, showed no improvement in sensitivity (82%) or detection rate (0.53%), but showed reduced specificity (82%). The protocol update in 2001 does not resolve this problematic situation, and national results continue to demonstrate a high percentage of positive screening mammograms. A significant increase in the number of positive screening examinations relative to recommended guidelines brings no advantage and leads to an overall decrease in the quality of screening.
Multiple Sparse Representations Classification
Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik
2015-01-01
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. Instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic that is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods.
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and sparsity level. PMID:26177106
Enhanced facial recognition for thermal imagery using polarimetric imaging.
Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W
2014-07-01
We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has generally been thought that conventional thermal imagery (mid-IR or LWIR) could not produce the detailed spatial information required for reliable human identification, due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. The polarimetric image sets considered include the conventional thermal intensity image, S0; the two Stokes images, S1 and S2; and a Stokes image product called the degree-of-linear-polarization image.
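Given intensity images captured through four polarizer orientations, the image products named above follow directly (a minimal sketch; forming S0 by averaging all four measurements is one common convention, not necessarily the authors' exact pipeline):

```python
import numpy as np

def stokes_images(i0, i45, i90, i135):
    """Linear Stokes images from four polarizer-orientation intensity images,
    plus the degree-of-linear-polarization (DoLP) image product."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total-intensity (conventional) image
    s1 = i0 - i90                        # horizontal vs vertical preference
    s2 = i45 - i135                      # diagonal preference
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp
```

The DoLP image is what surfaces the subtle facial micro-geometry that the unpolarized S0 image smooths away.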
Time reversal acoustics for small targets using decomposition of the time reversal operator
NASA Astrophysics Data System (ADS)
Simko, Peter C.
The method of time reversal acoustics has been the focus of considerable interest over the last twenty years. Time reversal imaging methods have made consistent progress as effective methods for signal processing since the initial demonstration that physical time reversal can be used to form convergent wave fields on a localized target, even under conditions of severe multipathing. Computational time reversal methods rely on the properties of the so-called 'time reversal operator' to extract information about the target medium. Applications for which time reversal imaging has previously been explored include medical imaging, non-destructive evaluation, and mine detection. Emphasis in this paper falls on two topics within the general field of computational time reversal imaging. First, we examine previous work on developing a time reversal imaging algorithm based on the MUltiple SIgnal Classification (MUSIC) algorithm. MUSIC, though computationally very intensive, has demonstrated early promise in simulations using array-based methods applicable to true volumetric (three-dimensional) imaging. We provide a simple algorithm through which the rank of the time reversal operator subspaces can be properly quantified, so that the rank of the associated null subspace can be accurately estimated near the central pulse wavelength in broadband imaging. Second, we focus on the scattering from small, acoustically rigid, two-dimensional cylindrical targets of elliptical cross section. Analysis of the time reversal operator eigenmodes has been well studied for symmetric response matrices associated with symmetric systems of scattering targets. We expand these previous results to include more general scattering systems leading to asymmetric response matrices, for which the analytical complexity increases but the physical interpretation of the time reversal operator remains unchanged.
For asymmetric responses, the qualitative properties of the time reversal operator eigenmodes remain consistent with those obtained from the more tightly constrained systems.
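The MUSIC-based computational approach described above can be sketched numerically. The array geometry, wavelength, and scatterer strengths below are illustrative assumptions, not the paper's configuration: the multistatic response matrix is decomposed by SVD, and the pseudospectrum peaks where a steering vector is (nearly) orthogonal to the noise subspace.

```python
import numpy as np

# Hypothetical 2-D setup: a linear array of 16 transducers and two point
# scatterers. All positions and amplitudes are illustrative assumptions.
wavelength = 1.0
k = 2 * np.pi / wavelength
elems = np.stack([np.linspace(-4, 4, 16), np.zeros(16)], axis=1)
scatterers = np.array([[1.0, 6.0], [-2.0, 8.0]])

def greens(points):
    """Far-field-style 2-D Green's function from each array element to each
    point: g[m, n] = exp(i k r) / sqrt(r)."""
    d = np.linalg.norm(points[None, :, :] - elems[:, None, :], axis=2)
    return np.exp(1j * k * d) / np.sqrt(d)

# Multistatic response matrix K = G diag(tau) G^T (Born approximation).
G = greens(scatterers)                       # (n_elems, n_scatterers)
K = G @ np.diag([1.0, 0.8]) @ G.T

# The time reversal operator is K^H K; the signal subspace of K is spanned
# by its leading singular vectors, one per well-resolved scatterer.
U, s, Vh = np.linalg.svd(K)
n_sig = 2                                    # rank of the signal subspace
U_noise = U[:, n_sig:]

# MUSIC pseudospectrum: large where the normalized steering vector g(r) is
# nearly orthogonal to the noise subspace, i.e. at scatterer locations.
xs, ys = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(4, 10, 61))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
Gsearch = greens(pts)
Gsearch /= np.linalg.norm(Gsearch, axis=0)
proj = np.linalg.norm(U_noise.conj().T @ Gsearch, axis=0)
pseudo = (1.0 / proj**2).reshape(xs.shape)

peak = np.unravel_index(np.argmax(pseudo), pseudo.shape)
print("pseudospectrum peak near:", xs[peak], ys[peak])
```

The peak lands on (one of) the scatterer positions; estimating `n_sig` correctly is exactly the subspace-rank question the paper addresses.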
Multiple Image Arrangement for Subjective Quality Assessment
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhai, Guangtao
2017-12-01
Subjective quality assessment serves as the foundation for almost all visual-quality-related research. The size of image quality databases has expanded from dozens to thousands of images over the last decades. Since each subjective rating has to be averaged over quite a few participants, the ever-increasing size of these databases calls for an evolution of existing subjective test methods. Traditional single/double-stimulus approaches are being replaced by multiple-image tests, in which several distorted versions of the original image are displayed and rated at once. This naturally raises the question of how to arrange those multiple images on screen during the test. In this paper, we answer this question by performing subjective viewing tests with an eye tracker for different types of arrangements. Our research indicates that an isometric arrangement imposes less strain on participants, yields a more uniform distribution of eye fixations and movements, and is therefore expected to generate more reliable subjective ratings.
A user's guide to localization-based super-resolution fluorescence imaging.
Dempsey, Graham T
2013-01-01
Advances in far-field fluorescence microscopy over the past decade have led to the development of super-resolution imaging techniques that provide more than an order of magnitude improvement in spatial resolution compared to conventional light microscopy. One such approach, called Stochastic Optical Reconstruction Microscopy (STORM), uses the sequential, nanometer-scale localization of individual fluorophores to reconstruct a high-resolution image of a structure of interest. This is an attractive method for biological investigation at the nanoscale due to its relative simplicity, both conceptually and practically in the laboratory. As with most research tools, however, the devil is in the details. The aim of this chapter is to serve as a guide for applying STORM to the study of biological samples. This chapter will discuss considerations for choosing a photoswitchable fluorescent probe, preparing a sample, selecting hardware for data acquisition, and collecting and analyzing data for image reconstruction. Copyright © 2013 Elsevier Inc. All rights reserved.
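A minimal sketch of the localization step at the heart of STORM-type reconstruction. Real pipelines typically fit a 2-D Gaussian to each blinking event; the intensity-weighted centroid below is a simplified, hypothetical stand-in that still recovers sub-pixel positions on clean data.

```python
import numpy as np

def localize_centroid(frame, threshold):
    """Estimate a single emitter's sub-pixel position as the
    intensity-weighted centroid of above-threshold pixels. (Illustrative
    estimator; STORM software usually fits a 2-D Gaussian instead.)"""
    w = np.where(frame > threshold, frame, 0.0).astype(float)
    total = w.sum()
    ys, xs = np.indices(frame.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total

# Synthetic camera frame: one Gaussian spot ("blinking fluorophore")
# centered at the sub-pixel position (12.3, 7.8).
yy, xx = np.indices((24, 24))
true_y, true_x = 12.3, 7.8
frame = 100.0 * np.exp(-((yy - true_y)**2 + (xx - true_x)**2) / (2 * 1.5**2))

est_y, est_x = localize_centroid(frame, threshold=5.0)
print(round(est_y, 2), round(est_x, 2))
```

Repeating this over thousands of frames, each with a sparse subset of active fluorophores, and plotting all localizations yields the super-resolved image.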
Targeted post-mortem computed tomography cardiac angiography: proof of concept.
Saunders, Sarah L; Morgan, Bruno; Raj, Vimal; Robinson, Claire E; Rutty, Guy N
2011-07-01
With the increasing use and availability of multi-detector computed tomography and magnetic resonance imaging in autopsy practice, there has been an international push towards the development of the so-called near virtual autopsy. Currently, however, a significant obstacle to near virtual autopsies one day replacing the conventional invasive autopsy is the failure of post-mortem imaging to yield detailed information about the coronary arteries. To date, a cost-effective, practical solution allowing high-throughput imaging has not been presented in the forensic literature. We present a proof-of-concept paper describing a simple, quick, cost-effective, manual, targeted, in situ post-mortem cardiac angiography method using a minimally invasive approach, to be used with multi-detector computed tomography for high-throughput cadaveric imaging in permanent or temporary mortuaries.
Improved photo response non-uniformity (PRNU) based source camera identification.
Cooper, Alan J
2013-03-10
The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
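The spatial-domain filtering idea can be illustrated with a toy simulation. The sensor pattern strength, noise levels, and plain 3×3 median filter below are assumptions for demonstration, not the paper's tuned pipeline (which combines adaptive and median filtering and adds a two-stage pixel-selection enhancement).

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

# Hypothetical sensor: a fixed multiplicative PRNU pattern K. The magnitude
# is exaggerated for illustration; real PRNU is far weaker and estimated
# from many frames.
shape = (128, 128)
K_cam = 0.05 * rng.standard_normal(shape)

def shoot(scene):
    """Simulate a capture: multiplicative PRNU plus additive noise."""
    return scene * (1.0 + K_cam) + rng.standard_normal(shape)

def noise_residual(img):
    """Spatial-domain residual: image minus its median-filtered version,
    in the spirit of the simplified (non-wavelet) filtering strategy."""
    return img - median_filter(img, size=3)

# Estimate the camera's reference pattern from flat-field shots.
flats = [shoot(np.full(shape, 100.0)) for _ in range(20)]
reference = np.mean([noise_residual(f) for f in flats], axis=0)

def corr(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Query image from the same camera vs. an image from a different camera.
query = noise_residual(shoot(np.full(shape, 80.0)))
other_img = np.full(shape, 80.0) * (1.0 + 0.05 * rng.standard_normal(shape)) \
            + rng.standard_normal(shape)
other = noise_residual(other_img)
print(corr(query, reference) > corr(other, reference))   # expect True
```

The matching residual correlates strongly with the reference pattern while the foreign camera's residual does not; the paper's contribution is sharpening exactly this separation.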
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks by using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are provided in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
Wavelet-space correlation imaging for high-speed MRI without motion monitoring or data segmentation.
Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles
2015-12-01
This study aims to (i) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and (ii) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called "wavelet-space correlation imaging", is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI, and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. © 2014 Wiley Periodicals, Inc.
Shear wave speed recovery in transient elastography and supersonic imaging using propagating fronts
NASA Astrophysics Data System (ADS)
McLaughlin, Joyce; Renzi, Daniel
2006-04-01
Transient elastography and supersonic imaging are promising new techniques for characterizing the elasticity of soft tissues. In these methods, an 'ultrafast imaging' system (up to 10,000 frames s⁻¹) follows in real time the propagation of a low-frequency shear wave. The displacement of the propagating shear wave is measured as a function of time and space. The objective of this paper is to develop and test algorithms whose ultimate product is images of the shear wave speed of tissue-mimicking phantoms. The data used in the algorithms are the front of the propagating shear wave. Here, we first develop techniques to find the arrival-time surface given the displacement data from a transient elastography experiment. The arrival-time surface satisfies the Eikonal equation. We then propose a family of methods, called distance methods, to solve the inverse Eikonal equation: given the arrival times of a propagating wave, find the wave speed. Lastly, we explain why simple inversion schemes for the inverse Eikonal equation lead to large outliers in the wave speed, and numerically demonstrate that the new scheme presented here does not have any large outliers. We exhibit two recoveries using these methods: one with synthetic data; the other with laboratory data obtained by Mathias Fink's group (the Laboratoire Ondes et Acoustique, ESPCI, Université Paris VII).
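The inverse Eikonal relation underlying these methods is |∇T| = 1/c, so a naive inversion reads the wave speed directly off the gradient of the arrival-time surface. A sketch on clean synthetic data (homogeneous medium, point source; both are illustrative choices, and it is precisely on noisy data that this direct formula produces the large outliers the paper discusses):

```python
import numpy as np

# Synthetic arrival-time surface: point source at the origin in a
# homogeneous medium with speed c = 3 (arbitrary units), so T = r / c.
h = 0.2                                      # grid spacing
x, y = np.meshgrid(np.arange(1, 40, h), np.arange(1, 40, h))
c_true = 3.0
T = np.sqrt(x**2 + y**2) / c_true

# Naive inversion of the Eikonal equation |grad T| = 1/c via central
# finite differences.
Ty, Tx = np.gradient(T, h)
c_est = 1.0 / np.hypot(Tx, Ty)

# On noise-free data the recovery is accurate up to discretization error.
print(round(float(np.abs(c_est[5:-5, 5:-5] - c_true).max()), 4))
```

With measurement noise, differentiating T amplifies errors and 1/|∇T| blows up near flat spots, which motivates the paper's distance methods.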
Machine learning in a graph framework for subcortical segmentation
NASA Astrophysics Data System (ADS)
Guo, Zhihui; Kashyap, Satyananda; Sonka, Milan; Oguz, Ipek
2017-02-01
Automated and reliable segmentation of subcortical structures from human brain magnetic resonance images is of great importance for volumetric and shape analyses in quantitative neuroimaging studies. However, poor boundary contrast and the variable shape of these structures make automated segmentation a challenging task. We propose a 3D graph-based machine learning method, called LOGISMOS-RF, to segment the caudate and the putamen from brain MRI scans in a robust and accurate way. An atlas-based tissue classification and bias-field correction method is applied to the images to generate an initial segmentation for each structure. Then a 3D graph framework is utilized to construct a geometric graph for each initial segmentation. A locally trained random forest classifier is used to assign a cost to each graph node. The max-flow algorithm is applied to solve the segmentation problem. Evaluation was performed on a dataset of T1-weighted MRIs of 62 subjects, with 42 images used for training and 20 images for testing. For comparison, the FreeSurfer, FSL and BRAINSCut approaches were also evaluated using the same dataset. Dice overlap coefficients and surface-to-surface distances between the automated segmentation and expert manual segmentations indicate that the results of our method are statistically significantly more accurate than the three other methods, for both the caudate (Dice: 0.89 ± 0.03) and the putamen (0.89 ± 0.03).
Imaging Tests for Lower Back Pain: When You Need Them -- and When You Don't
X-rays, CT scans, and MRIs are called imaging tests because they take pictures, or images, of the body.
Feature extraction and classification of clouds in high resolution panchromatic satellite imagery
NASA Astrophysics Data System (ADS)
Sharghi, Elan
The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by human image analysts. A tool is therefore needed to automate the image analyst's job by intelligently detecting and classifying objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®), designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC), performs exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.
Mosaic CCD method: A new technique for observing dynamics of cometary magnetospheres
NASA Technical Reports Server (NTRS)
Saito, T.; Takeuchi, H.; Kozuba, Y.; Okamura, S.; Konno, I.; Hamabe, M.; Aoki, T.; Minami, S.; Isobe, S.
1992-01-01
On April 29, 1990, the plasma tail of Comet Austin was observed with a CCD camera on the 105-cm Schmidt telescope at the Kiso Observatory of the University of Tokyo. The area of the CCD used in this observation is only about 1 sq cm; on the 105-cm Schmidt telescope, this corresponds to a narrow square field of view of 12′ × 12′. Comparison with the photograph of Comet Austin taken by Numazawa (personal communication) on the same night shows that only a small part of the plasma tail can be photographed at one time with the CCD. However, by shifting the view on the CCD after each exposure, we succeeded in imaging the entire length of the cometary magnetosphere of 1.6 × 10⁶ km. This new technique is called 'the mosaic CCD method'. In order to study the dynamics of cometary plasma tails, seven frames of the comet from the head to the tail region were imaged twice with the mosaic CCD method, yielding two sets of images. Six microstructures, including arcade structures, were identified in both image sets. Sketches of the plasma tail including the microstructures are included.
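The mosaicking step itself reduces to placing each narrow-field frame onto a common canvas at its known pointing offset, averaging where exposures overlap. A schematic sketch with made-up tile sizes and offsets (the paper used seven frames along the tail):

```python
import numpy as np

def mosaic(frames, offsets, canvas_shape):
    """Place narrow-field CCD frames onto one canvas at known pointing
    offsets (row, col), averaging where exposures overlap."""
    acc = np.zeros(canvas_shape)
    cnt = np.zeros(canvas_shape)
    for frame, (r, c) in zip(frames, offsets):
        h, w = frame.shape
        acc[r:r+h, c:c+w] += frame
        cnt[r:r+h, c:c+w] += 1
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

# Illustrative example: a long "plasma tail" covered by overlapping tiles.
sky = np.zeros((64, 300))
sky[30:34, :] = 1.0                          # the tail
tiles, offs = [], []
for i in range(7):                           # seven frames, as in the paper
    c = i * 40
    offs.append((0, c))
    tiles.append(sky[:, c:c+64].copy())      # 64-column field of view

full = mosaic(tiles, offs, sky.shape)
print(np.allclose(full, sky))                # expect True
```

In practice the offsets come from the telescope pointing and must be refined by cross-correlating the overlap regions; the ideal offsets here are an assumption.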
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the images these devices produce. We have developed a fast and accurate method (known as "SET") that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations ("Natural"); and images of less challenging indoor scenes ("CASIA-Iris-Thousand"). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library ("DLL"), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Multi-layer cube sampling for liver boundary detection in PET-CT images.
Liu, Xinxin; Yang, Jian; Song, Shuang; Song, Hong; Ai, Danni; Zhu, Jianjun; Jiang, Yurong; Wang, Yongtian
2018-06-01
Liver metabolic information is considered a crucial marker for the diagnosis of fever of unknown origin, and liver recognition is the basis for automatically extracting this metabolic information. However, the poor quality of PET and CT images is a challenge for information extraction and target recognition in PET-CT images. Existing detection methods cannot meet the requirements of liver recognition in PET-CT images, which is a key problem in the large-scale analysis of PET-CT data. A novel texture feature descriptor called multi-layer cube sampling (MLCS) is developed for liver boundary detection in low-dose CT and PET images. The cube sampling feature is proposed to extract more texture information, using a bi-centric voxel strategy. Neighbouring voxels are divided into three regions by the centre voxel and the reference voxel in the histogram, and the voxel distribution information is statistically classified as a texture feature. Multi-layer texture features are also used to improve the ability and adaptability of target recognition in volume data. The proposed feature is tested on PET and CT images for liver boundary detection. For the liver in the volume data, the mean detection rate (DR) and mean error rate (ER) reached 95.15% and 7.81% in low-quality PET images, and 83.10% and 21.08% in low-contrast CT images. The experimental results demonstrate that the proposed method is effective and robust for liver boundary detection.
Dental magnetic resonance imaging: making the invisible visible.
Idiyatullin, Djaudat; Corum, Curt; Moeller, Steen; Prasad, Hari S; Garwood, Michael; Nixdorf, Donald R
2011-06-01
Clinical dentistry is in need of noninvasive and accurate diagnostic methods to better evaluate dental pathosis. The purpose of this work was to assess the feasibility of a recently developed magnetic resonance imaging (MRI) technique, called SWeep Imaging with Fourier Transform (SWIFT), to visualize dental tissues. Three in vitro teeth, representing a limited range of clinical conditions of interest, were imaged using a 9.4T system with scanning times ranging from 100 seconds to 25 minutes. In vivo imaging of a subject was performed using a 4T system with a 10-minute scanning time. SWIFT images were compared with traditional two-dimensional radiographs, three-dimensional cone-beam computed tomography (CBCT) scanning, the gradient-echo MRI technique, and histological sections. A resolution of 100 μm was obtained from in vitro teeth. SWIFT also identified the presence and extent of dental caries and fine structures of the teeth, including cracks and accessory canals, which are not visible with existing clinical radiography techniques. Intraoral positioning of the radiofrequency coil produced initial images of multiple adjacent teeth at a resolution of 400 μm. SWIFT MRI offers simultaneous three-dimensional hard- and soft-tissue imaging of teeth without the use of ionizing radiation. Furthermore, it has the potential to image minute dental structures within clinically relevant scanning times. This technology has implications for endodontists because it offers a potential method to longitudinally evaluate teeth where pulp and root structures have been regenerated. Copyright © 2011 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images
NASA Astrophysics Data System (ADS)
Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin
2016-10-01
Image enhancement techniques are able to improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for some deficiencies of the respective brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and enhances the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization, and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Experiments on real data demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also has better enhancement performance than conventional baseline algorithms. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis
NASA Technical Reports Server (NTRS)
Hadenfeldt, Andrew C.
1991-01-01
The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds application in image data base browsing, teleconferencing, medical and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color mapped PT scheme is developed to evaluate its performance. Results of simulation using several test images are presented.
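The abstract's point that color-mapped images are awkward for progressive transmission can be made concrete: index values are categorical references into the colormap, so a coarse stage must transmit representative indices (e.g., by subsampling) rather than averages, since the mean of two indices generally points at an unrelated color. A hypothetical resolution-pyramid sketch:

```python
import numpy as np

def progressive_stages(indices, steps=(8, 4, 2, 1)):
    """Yield coarse-to-fine approximations of a color-mapped image.
    Index values are categorical (colormap entries), so each coarse stage
    subsamples one index per block instead of averaging - averaging two
    indices would select an unrelated colormap color."""
    h, w = indices.shape
    for s in steps:
        coarse = indices[::s, ::s]
        # Nearest-neighbor upsample back to full size for display.
        yield coarse.repeat(s, axis=0).repeat(s, axis=1)[:h, :w]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64))    # 2-D array of colormap indices
stages = list(progressive_stages(img))

# Each stage transmits more index samples; the final stage is lossless.
print([int((s != img).sum()) for s in stages])
```

A real coder would transmit only the new samples at each stage and entropy-code them; this sketch only illustrates the categorical-index constraint the thesis addresses.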
Multi-modality image registration for effective thermographic fever screening
NASA Astrophysics Data System (ADS)
Dwith, C. Y. N.; Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-02-01
Fever screening based on infrared thermographs (IRTs) is a viable mass screening approach during infectious disease pandemics, such as Ebola and Severe Acute Respiratory Syndrome (SARS), for temperature monitoring in public places like hospitals and airports. IRTs have been found to be powerful, quick and non-invasive methods for detecting elevated temperatures. Moreover, regions medially adjacent to the inner canthi (called the canthi regions in this paper) are preferred sites for fever screening. Accurate localization of the canthi regions can be achieved through multi-modality registration of infrared (IR) and white-light images. Here we propose a registration method through a coarse-fine registration strategy using different registration models based on landmarks and edge detection on eye contours. We have evaluated the registration accuracy to be within ±2.7 mm, which enables accurate localization of the canthi regions.
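The landmark-based stage of such a coarse-to-fine registration can be sketched as a least-squares affine fit between corresponding points. The landmark coordinates and transform below are illustrative; the paper's method additionally refines the alignment using edge detection on the eye contours.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping landmark points src -> dst
    (e.g., eye landmarks picked in the IR and white-light images).
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1]^T."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])     # (n, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T                                # (2, 3)

def apply_affine(A, pts):
    return pts @ A[:, :2].T + A[:, 2]

# Illustrative check: recover a known rotation + scale + shift from four
# noiseless landmark correspondences.
theta, s, t = np.deg2rad(10), 1.2, np.array([3.0, -1.5])
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src @ R.T + t

A = fit_affine(src, dst)
err = np.abs(apply_affine(A, src) - dst).max()
print(err < 1e-9)                             # expect True
```

With noisy landmarks the same fit gives the coarse alignment, and a finer model then handles residual distortion between the two modalities.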
Wu, L C; D'Amelio, F; Fox, R A; Polyakov, I; Daunton, N G
1997-06-06
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
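The described NIH-Image pipeline maps naturally onto array code. The sketch below uses an assumed synthetic micrograph, and it approximates the bitwise 'AND'ing step with a positive-response mask; the filter radius is an illustrative choice, while the threshold of 128 follows the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 8-bit micrograph: small bright "terminals" on a noisy background
# (stand-in for the GABA-immunolabeled puncta around a cell soma).
img = rng.normal(90, 10, size=(256, 256))
for cy, cx in rng.integers(20, 236, size=(30, 2)):
    img[cy-1:cy+2, cx-1:cx+2] += 120.0       # 3x3 bright puncta
img = np.clip(img, 0, 255)

# 1) FFT to the frequency domain; suppress low frequencies (slowly varying
#    background), keeping the high-frequency content carrying the puncta.
F = np.fft.fftshift(np.fft.fft2(img))
ky, kx = np.indices(F.shape)
r = np.hypot(ky - 128, kx - 128)
F[r < 6] = 0                                 # simple high-pass filter

# 2) Inverse FFT restores the filtered image to the spatial domain.
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# 3) Combine the original with the transformed image (the recipe's
#    'AND'ing) so only structures present in both survive, then threshold
#    at 128 to produce a binary terminal mask.
enhanced = np.where(filtered > 0, img, 0)
binary = (enhanced >= 128).astype(np.uint8)

# 4) The measurement is simply the pixel count of the binary mask.
print(int(binary.sum()) > 0)                 # expect True
```

The final pixel count, divided by the traced soma perimeter, gives the area-per-perimeter measure the report describes.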
The 2016 interferometric imaging beauty contest
NASA Astrophysics Data System (ADS)
Sanchez-Bermudez, J.; Thiébaut, E.; Hofmann, K.-H.; Heininger, M.; Schertl, D.; Weigelt, G.; Millour, F.; Schutz, A.; Ferrari, A.; Vannier, M.; Mary, D.; Young, J.
2016-08-01
Image reconstruction in optical interferometry has gained considerable importance for astrophysical studies during the last decade. This has been mainly due to improvements in the imaging capabilities of existing interferometers and the expectation of new facilities in the coming years. However, despite the advances made so far, image synthesis in optical interferometry is still an open field of research. Since 2004, the community has organized a biennial contest to formally test the different methods and algorithms for image reconstruction. In 2016, we celebrated the 7th edition of the "Interferometric Imaging Beauty Contest". This initiative represented an open call to participate in the reconstruction of a selected set of simulated targets with a wavelength-dependent morphology as they could be observed by the 2nd generation of VLTI instruments. This contest represents a unique opportunity to benchmark, in a systematic way, the current advances and limitations in the field, as well as to discuss possible future approaches. In this contribution, we summarize: (a) the rules of the 2016 contest; (b) the different data sets used and the selection procedure; (c) the methods and results obtained by each one of the participants; and (d) the metric used to select the best reconstructed images. Finally, we named Karl-Heinz Hofmann and the group of the Max-Planck-Institut für Radioastronomie as winners of this edition of the contest.
Bio-inspired color image enhancement
NASA Astrophysics Data System (ADS)
Meylan, Laurence; Susstrunk, Sabine
2004-06-01
Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow, or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex. This model determines the perceived color given the spatial relationships of the captured signals. Retinex has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
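A single-scale, Retinex-inspired luminance operation of the kind described can be sketched as follows. The fixed Gaussian scale here is an illustrative assumption, whereas the authors' single filter is image-dependent and parameter-free.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_luminance(lum, sigma=30.0):
    """Retinex-inspired local enhancement of the luminance channel: each
    pixel is expressed relative to a Gaussian-weighted local average,
    compressing large global luminance differences while preserving local
    contrast. (Single-scale sketch with a fixed sigma, not the authors'
    adaptive, parameter-free filter.)"""
    log_l = np.log1p(lum.astype(float))
    surround = gaussian_filter(log_l, sigma)
    out = log_l - surround
    return (out - out.min()) / (out.max() - out.min())   # rescale to [0, 1]

# High-dynamic-range toy scene: a dim "room" next to a bright "window",
# each carrying faint local detail.
scene = np.concatenate([np.full((64, 64), 5.0),
                        np.full((64, 64), 2000.0)], axis=1)
scene += np.tile(np.linspace(0, 3, 128), (64, 1))
enhanced = retinex_luminance(scene)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```

After the log-ratio step, the 400:1 luminance gap between the two halves is compressed, so detail in both the dim and bright regions fits in the display range.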
NASA Astrophysics Data System (ADS)
Linek, M.; Jungmann, M.; Berlage, T.; Clauser, C.
2005-12-01
Within the Ocean Drilling Program (ODP), image logging tools such as the Formation MicroScanner (FMS) or the Resistivity-At-Bit (RAB) tools have been routinely deployed. Both logging methods are based on resistivity measurements at the borehole wall and therefore are sensitive to conductivity contrasts, which are mapped in color-scale images. These images are commonly used to study the structure of the sedimentary rocks and the oceanic crust (petrologic fabric, fractures, veins, etc.). So far, mapping of lithology from electrical images has been purely based on visual inspection and subjective interpretation. We apply digital image analysis to electrical borehole wall images in order to develop a method that augments objective rock identification. We focus on supervised textural pattern recognition, which studies the spatial gray level distribution with respect to certain rock types. FMS image intervals of rock classes known from core data are taken in order to train textural characteristics for each class. A so-called gray level co-occurrence matrix is computed by counting the occurrences of pairs of gray levels that are a certain distance apart. Once the matrix for an image interval is computed, we calculate the image contrast, homogeneity, energy, and entropy. We assign characteristic textural features to different rock types by reducing the image information to a small set of descriptive features. Once a discriminating set of texture features for each rock type is found, we are able to classify entire FMS images according to the trained rock types. A rock classification based on texture features enables quantitative lithology mapping and is characterized by high repeatability, in contrast to a purely visual, subjective image interpretation. We show examples of rock classification between breccias, pillows, massive units, and horizontally bedded tuffs based on ODP image data.
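The co-occurrence computation and the four features named above can be sketched directly. The quantization depth and pixel offset are illustrative choices; a real workflow would compute several offsets per FMS interval.

```python
import numpy as np

def glcm_features(img, levels=8, dy=0, dx=1):
    """Normalized gray level co-occurrence matrix P for the pixel offset
    (dy, dx) (non-negative offsets only, for brevity), plus the contrast,
    homogeneity, energy, and entropy features used for rock-type training.
    Images are first quantized to a small number of gray levels, as is
    conventional for co-occurrence work."""
    h, w = img.shape
    q = np.minimum((img.astype(float) / (img.max() + 1e-12)
                    * levels).astype(int), levels - 1)
    a = q[:h-dy, :w-dx].ravel()               # first pixel of each pair
    b = q[dy:, dx:].ravel()                   # pixel at offset (dy, dx)
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1.0)
    P /= P.sum()
    i, j = np.indices(P.shape)
    return {
        "contrast":    float((P * (i - j) ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
        "energy":      float((P ** 2).sum()),
        "entropy":     float(-(P[P > 0] * np.log2(P[P > 0])).sum()),
    }

rng = np.random.default_rng(3)
smooth = np.tile(np.linspace(0, 255, 64), (64, 1))   # smooth "massive unit"
noisy = rng.integers(0, 256, size=(64, 64)).astype(float)  # rough texture

f_smooth, f_noisy = glcm_features(smooth), glcm_features(noisy)
print(f_smooth["contrast"] < f_noisy["contrast"])    # expect True
```

A smooth texture concentrates P near the diagonal (low contrast, high energy), while a rough one spreads it out; a classifier trained on these four numbers per interval separates the rock classes.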
Cardiac-gated parametric images from 82Rb PET from dynamic frames and direct 4D reconstruction.
Germino, Mary; Carson, Richard E
2018-02-01
Cardiac perfusion PET data can be reconstructed as a dynamic sequence and kinetic modeling performed to quantify myocardial blood flow, or reconstructed as static gated images to quantify function. Parametric images from dynamic PET are conventionally not gated, to allow use of all events with lower noise. An alternative method for dynamic PET is to incorporate the kinetic model into the reconstruction algorithm itself, bypassing the generation of a time series of emission images and directly producing parametric images. So-called "direct reconstruction" can produce parametric images with lower noise than the conventional method because the noise distribution is more easily modeled in projection space than in image space. In this work, we develop direct reconstruction of cardiac-gated parametric images for 82Rb PET with an extension of the Parametric Motion compensation OSEM List mode Algorithm for Resolution-recovery reconstruction for the one tissue model (PMOLAR-1T). PMOLAR-1T was extended to accommodate model terms to account for spillover from the left and right ventricles into the myocardium. The algorithm was evaluated on a 4D simulated 82Rb dataset, including a perfusion defect, as well as a human 82Rb list mode acquisition. The simulated list mode was subsampled into replicates, each with counts comparable to one gate of a gated acquisition. Parametric images were produced by the indirect (separate reconstructions and modeling) and direct methods for each of eight low-count and eight normal-count replicates of the simulated data, and each of eight cardiac gates for the human data. For the direct method, two initialization schemes were tested: uniform initialization, and initialization with the filtered iteration 1 result of the indirect method. For the human dataset, event-by-event respiratory motion compensation was included.
The indirect and direct methods were compared for the simulated dataset in terms of bias and coefficient of variation as a function of iteration. Convergence of direct reconstruction was slow with uniform initialization; lower bias was achieved in fewer iterations by initializing with the filtered indirect iteration 1 images. For most parameters and regions evaluated, the direct method achieved the same or lower absolute bias at matched iteration as the indirect method, with 23%-65% lower noise. Additionally, the direct method gave better contrast between the perfusion defect and surrounding normal tissue than the indirect method. Gated parametric images from the human dataset had comparable relative performance of indirect and direct, in terms of mean parameter values per iteration. Changes in myocardial wall thickness and blood pool size across gates were readily visible in the gated parametric images, with higher contrast between myocardium and left ventricle blood pool in parametric images than gated SUV images. Direct reconstruction can produce parametric images with less noise than the indirect method, opening the potential utility of gated parametric imaging for perfusion PET. © 2017 American Association of Physicists in Medicine.
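The one-tissue kinetic model at the core of PMOLAR-1T can be illustrated with a small sketch. This is not the authors' algorithm: it is a minimal indirect-style fit of the one-tissue model C_T(t) = K1 · (Cp ⊗ e^(−k2·t)) to a noisy time-activity curve, with a toy input function and the left/right-ventricle spillover terms omitted; all names and numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 600, 121)          # seconds, 5 s frames
dt = t[1] - t[0]
Cp = (t / 30.0) * np.exp(-t / 30.0)   # toy arterial input function

def one_tissue(t, K1, k2):
    # C_T(t) = K1 * [Cp convolved with exp(-k2 t)], discrete convolution
    return K1 * np.convolve(Cp, np.exp(-k2 * t))[: len(t)] * dt

true_K1, true_k2 = 0.8, 0.12
tac = one_tissue(t, true_K1, true_k2)
tac_noisy = tac + 0.01 * np.random.default_rng(0).standard_normal(len(t))

# Indirect-style estimation: fit the model to the reconstructed TAC
(K1_est, k2_est), _ = curve_fit(one_tissue, t, tac_noisy,
                                p0=[0.5, 0.1], bounds=(0, [5.0, 2.0]))
```

The direct method described in the abstract would instead fold this model into the reconstruction itself, fitting in projection space rather than to an image-derived time-activity curve.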
Simulation of the Inferior Mirage
ERIC Educational Resources Information Center
Branca, Mario
2010-01-01
A mirage can occur when a continuous variation in the refractive index of the air causes light rays to follow a curved path. As a result, the image we see is displaced from the location of the object. If the image appears higher in the air than the object, it is called a "superior" mirage, while if it appears lower it is called an "inferior"…
Wang, Hongzhi; Yushkevich, Paul A.
2013-01-01
Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and were among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far. PMID:24319427
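The weighted-voting baseline the abstract contrasts with (weights computed independently per atlas from atlas-target intensity similarity) can be sketched in a few lines. This toy illustration is not the joint label fusion method or the ITK implementation; the arrays, similarity kernel, and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.random((8, 8))                       # toy target intensity image
atlases = [target + 0.1 * rng.standard_normal((8, 8)) for _ in range(4)]
labels  = [np.ones((8, 8), dtype=int) for _ in range(4)]
labels[3][:] = 0                                  # one dissenting atlas

# Spatially varying weights from atlas-target intensity similarity
weights = np.stack([np.exp(-(a - target) ** 2 / 0.01) for a in atlases])
weights /= weights.sum(axis=0)

# Weighted vote per voxel over the binary labels {0, 1}
vote1 = sum(w * (l == 1) for w, l in zip(weights, labels))
fused = (vote1 >= 0.5).astype(int)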
NASA Astrophysics Data System (ADS)
Petoussi-Henss, Nina; Becker, Janine; Greiter, Matthias; Schlattl, Helmut; Zankl, Maria; Hoeschen, Christoph
2014-03-01
In radiography there is generally a conflict between the best image quality and the lowest possible patient dose. A proven method of dosimetry is the simulation of radiation transport in virtual human models (i.e. phantoms). However, while the resolution of these voxel models is adequate for most dosimetric purposes, they cannot provide the required organ fine structures necessary for the assessment of the imaging quality. The aim of this work is to develop hybrid/dual-lattice voxel models (also called phantoms) as well as simulation methods by which patient dose and image quality for typical radiographic procedures can be determined. The results will provide a basis to investigate by means of simulations the relationships between patient dose and image quality for various imaging parameters and develop methods for their optimization. A hybrid model, based on NURBS (Non-Uniform Rational B-Spline) and PM (Polygon Mesh) surfaces, was constructed from an existing voxel model of a female patient. The organs of the hybrid model can then be scaled and deformed in a non-uniform way, i.e. organ by organ; they can thus be adapted to patient characteristics without losing their anatomical realism. Furthermore, the left lobe of the lung was substituted by a high resolution lung voxel model, resulting in a dual-lattice geometry model. "Dual lattice" means in this context the combination of voxel models with different resolution. Monte Carlo simulations of radiographic imaging were performed with the code EGS4nrc, modified to perform dual-lattice transport. Results are presented for a thorax examination.
Task Performance with List-Mode Data
NASA Astrophysics Data System (ADS)
Caucci, Luca
This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
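The role the dissertation assigns to the Fisher information matrix, characterizing the performance of maximum-likelihood estimators applied to list-mode data, can be illustrated in the simplest possible setting: a list of event attributes drawn from a one-parameter Gaussian model. Everything below (the model, the numbers) is a toy assumption, not the dissertation's imaging system.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n = 2.0, 1.0, 5000
# One attribute sample per detected event, stored as a list-mode stream
events = rng.normal(theta, sigma, n)

theta_ml = events.mean()          # ML estimate of the mean for this model
fisher = n / sigma ** 2           # Fisher information for the mean parameter
crlb = 1.0 / fisher               # Cramér-Rao lower bound on estimator variance
```

For an efficient estimator such as this one, the empirical variance over repeated lists approaches the Cramér-Rao bound 1/fisher, which is how the Fisher information matrix quantifies estimation performance.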
A Gauss-Seidel Iteration Scheme for Reference-Free 3-D Histological Image Reconstruction
Daum, Volker; Steidl, Stefan; Maier, Andreas; Köstler, Harald; Hornegger, Joachim
2015-01-01
Three-dimensional (3-D) reconstruction of histological slice sequences offers great benefits in the investigation of different morphologies. It features very high resolution, which is still unmatched by in-vivo 3-D imaging modalities, and tissue staining further enhances visibility and contrast. One important step during reconstruction is the reversal of slice deformations introduced during histological slice preparation, a process also called image unwarping. Most methods use an external reference, or rely on conservative stopping criteria during the unwarping optimization to prevent straightening of naturally curved morphology. Our approach shows that the problem of unwarping is based on the superposition of low-frequency anatomy and high-frequency errors. We present an iterative scheme that transfers the ideas of the Gauss-Seidel method to image stacks to separate the anatomy from the deformation. In particular, the scheme is universally applicable without restriction to a specific unwarping method, and uses no external reference. The deformation artifacts are effectively reduced in the resulting histology volumes, while the natural curvature of the anatomy is preserved. The validity of our method is shown on synthetic data, simulated histology data using a CT data set, and real histology data. In the case of the simulated histology, where the ground truth was known, the mean Target Registration Error (TRE) between the unwarped and original volume could be reduced to less than 1 pixel on average after 6 iterations of our proposed method. PMID:25312918
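The classical Gauss-Seidel iteration whose ideas the paper transfers to image stacks can be sketched on its original problem, a linear system: each component is updated in place, immediately reusing its neighbours' freshly updated values (analogous to re-aligning each slice against already-updated neighbouring slices). A minimal sketch, not the unwarping algorithm itself:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Solve A x = b by sweeping the components in order, reusing
    each updated value immediately within the same sweep."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # uses already-updated entries
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])   # diagonally dominant, so the sweep converges
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
```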
NASA Astrophysics Data System (ADS)
Tichauer, Kenneth M.
2016-03-01
One of the major complications with conventional imaging-agent-based molecular imaging, particularly for cancer imaging, is variability in agent delivery and nonspecific retention in biological tissue. Such factors can act to "swamp" the signal arising from specifically bound imaging agent, which is presumably indicative of the concentration of the targeted biomolecule. In the 1950s, Pressman et al. proposed a method of accounting for these delivery and retention effects by normalizing targeted antibody retention to the retention of a co-administered "untargeted"/control imaging agent [1]. Our group resurrected the approach within the last 5 years, finding ways to utilize this so-called "paired-agent" imaging approach to directly quantify biomolecule concentration in tissue (in vitro, ex vivo, and in vivo) [2]. These novel paired-agent imaging approaches capable of quantifying biomolecule concentration provide enormous potential for being adapted to and optimizing molecular-guided surgery, which has a principal goal of identifying distinct biological tissues (tumor, nerves, etc.) based on their distinct molecular environments. This presentation will cover the principles and nuances of paired-agent imaging, as well as the current status of the field and future applications. [1] D. Pressman, E. D. Day, and M. Blau, "The use of paired labeling in the determination of tumor-localizing antibodies," Cancer Res, 17(9), 845-50 (1957). [2] K. M. Tichauer, Y. Wang, B. W. Pogue et al., "Quantitative in vivo cell-surface receptor imaging in oncology: kinetic modeling and paired-agent principles from nuclear medicine and optical imaging," Phys Med Biol, 60(14), R239-69 (2015).
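The core paired-agent idea, normalizing targeted-agent retention by a co-administered control agent, can under strong simplifying assumptions be reduced to a ratiometric estimate of binding at a late time point. The snippet below is a heavily simplified sketch of that idea, not the kinetic-model-based estimators of [2]; the uptake numbers are hypothetical.

```python
import numpy as np

# Hypothetical late-time-point uptake in three tissue regions
targeted   = np.array([1.8, 2.4, 0.9])   # targeted imaging agent
untargeted = np.array([1.0, 1.2, 0.85])  # co-administered control agent

# Simplified ratiometric estimate: binding potential ~ (targeted/control) - 1,
# assuming the two agents share delivery and nonspecific retention kinetics
bp = targeted / untargeted - 1.0
```

Regions where the targeted agent merely tracks delivery (ratio near 1) yield bp near zero, while specific binding drives the ratio, and hence bp, upward.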
A simple and inexpensive method of preoperative computer imaging for rhinoplasty.
Ewart, Christopher J; Leonard, Christopher J; Harper, J Garrett; Yu, Jack
2006-01-01
GOALS/PURPOSE: Despite concerns of legal liability, preoperative computer imaging has become a popular tool for the plastic surgeon. The ability to project possible surgical outcomes can facilitate communication between the patient and surgeon. It can be an effective tool in the education and training of residents. Unfortunately, these imaging programs are expensive and have a steep learning curve. The purpose of this paper is to present a relatively inexpensive method of preoperative computer imaging with a reasonable learning curve. The price of currently available imaging programs was acquired through an online search, and inquiries were made to the software distributors. Their prices were compared to Adobe Photoshop, which has special filters called "liquify" and "photocopy." It was used in the preoperative computer planning of 2 patients who presented for rhinoplasty at our institution. Projected images were created based on harmonious discussions between the patient and physician. Importantly, these images were presented to the patient as potential results, with no guarantees as to actual outcomes. Adobe Photoshop can be purchased for 900-5800 dollars less than the leading computer imaging software for cosmetic rhinoplasty. Effective projected images were created using the "liquify" and "photocopy" filters in Photoshop. Both patients had surgical planning and operations based on these images. They were satisfied with the results. Preoperative computer imaging can be a very effective tool for the plastic surgeon by providing improved physician-patient communication, increased patient confidence, and enhanced surgical planning. Adobe Photoshop is a relatively inexpensive program that can provide these benefits using only 1 or 2 features.
NASA Technical Reports Server (NTRS)
2002-01-01
The Moderate-resolution Imaging Spectroradiometer's (MODIS') cloud detection capability is so sensitive that it can detect clouds that would be indistinguishable to the human eye. This pair of images highlights MODIS' ability to detect what scientists call 'sub-visible cirrus.' The image on top shows the scene using data collected in the visible part of the electromagnetic spectrum, the part our eyes can see. Clouds are apparent in the center and lower right of the image, while the rest of the image appears to be relatively clear. However, data collected at 1.38 µm (lower image) show that a thick layer of previously undetected cirrus clouds obscures the entire scene. These kinds of cirrus are called 'sub-visible' because they can't be detected using only visible light. MODIS' 1.38 µm channel detects electromagnetic radiation in the infrared region of the spectrum. These images were made from data collected on April 4, 2000. Image courtesy Mark Gray, MODIS Atmosphere Team
NASA Technical Reports Server (NTRS)
1990-01-01
This image is a full-resolution mosaic of several Magellan images and is centered at 61 degrees north latitude and 341 degrees east longitude. The image is 250 kilometers wide (150 miles). The radar smooth region in the northern part of the image is Lakshmi Planum, a high plateau region roughly 3.5 kilometers (2.2 miles) above the mean planetary radius. Lakshmi Planum is ringed by intensely deformed terrain, some of which is shown in the southern portion of the image and is called Clotho Tessera. The 64-kilometer (40 mile) diameter circular feature in the image is a depression called Siddons and may be a volcanic caldera. This view is supported by the collapsed lava tubes surrounding the feature. By carefully studying this and other surrounding images scientists hope to discover what tectonic and volcanic processes formed this complex region. The solid black parts of the image represent data gaps that may be filled in by the Magellan extended mission.
Mapping brain activity in gradient-echo functional MRI using principal component analysis
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Singh, Manbir; Don, Manuel
1997-05-01
The detection of sites of brain activation in functional MRI has been a topic of immense research interest and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract the activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noises. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. This technique is well suited to EPI image sequences where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two methods that are based on PCA to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique where a single image sequence with alternating on and off stages is subject to a principal component analysis. The second method is a PCA-based approach called the common spatial factor analysis technique (CSF). As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We have applied these methods to identify active brain areas during visual stimulation and motor tasks. The results from these methods are compared to those obtained by using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that PCA and CSF methods have good potential in detecting the true stimulus correlated changes in the presence of other interfering signals.
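The conventional PCA step can be sketched on a synthetic on/off block design. This toy example (not the paper's data; the dimensions, noise level, and paradigm are assumptions) shows how the top principal component recovers the stimulus time course without the paradigm being supplied to the decomposition; the paradigm is used here only to generate the data and to verify the result.

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_vox = 40, 100                            # time points, voxels
paradigm = np.tile([0.0] * 5 + [1.0] * 5, 4)    # on/off block design
active = np.zeros(n_vox); active[:10] = 1.0     # 10 "activated" voxels

# Activation signal plus uncorrelated noise, voxel means removed
data = np.outer(paradigm, active) + 0.1 * rng.standard_normal((n_t, n_vox))
data -= data.mean(axis=0)

# PCA via SVD: columns of U are temporal components, rows of Vt spatial maps
U, S, Vt = np.linalg.svd(data, full_matrices=False)
first_map = Vt[0]                               # spatial map of top component

# Correlation of the top temporal component with the (withheld) paradigm
r = np.corrcoef(U[:, 0], paradigm - paradigm.mean())[0, 1]
```

The sign of a principal component is arbitrary, so only |r| is meaningful; the spatial map `first_map` likewise weights the activated voxels most heavily.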
Quantitative x-ray phase imaging at the nanoscale by multilayer Laue lenses
Yan, Hanfei; Chu, Yong S.; Maser, Jörg; Nazaretski, Evgeny; Kim, Jungdae; Kang, Hyon Chol; Lombardo, Jeffrey J.; Chiu, Wilson K. S.
2013-01-01
For scanning x-ray microscopy, many attempts have been made to image the phase contrast based on a concept of the beam being deflected by a specimen, the so-called differential phase contrast imaging (DPC). Despite the successful demonstration in a number of representative cases at moderate spatial resolutions, these methods suffer from various limitations that preclude applications of DPC for ultra-high spatial resolution imaging, where the emerging wave field from the focusing optic tends to be significantly more complicated. In this work, we propose a highly robust and generic approach based on a Fourier-shift fitting process and demonstrate quantitative phase imaging of a solid oxide fuel cell (SOFC) anode by multilayer Laue lenses (MLLs). The high sensitivity of the phase to structural and compositional variations makes our technique extremely powerful in correlating the electrode performance with its buried nanoscale interfacial structures that may be invisible to the absorption and fluorescence contrasts. PMID:23419650
Stereo imaging with spaceborne radars
NASA Technical Reports Server (NTRS)
Leberl, F.; Kobrick, M.
1983-01-01
Stereo viewing is a valuable tool in photointerpretation and is used for the quantitative reconstruction of the three dimensional shape of a topographical surface. Stereo viewing refers to a visual perception of space by presenting an overlapping image pair to an observer so that a three dimensional model is formed in the brain. Some of the observer's function is performed by machine correlation of the overlapping images, so-called automated stereo correlation. The direct perception of space with two eyes is often called natural binocular vision; techniques of generating three dimensional models of the surface from two sets of monocular image measurements are the topic of stereology.
VIEW-Station software and its graphical user interface
NASA Astrophysics Data System (ADS)
Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki
1992-04-01
VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters, called µV-Sugar, VS-Shell and VPL, are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUI for new applications on the VIEW-Station system. A set of widgets are available for construction of task-oriented GUI. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.
Image reconstruction of x-ray tomography by using image J platform
NASA Astrophysics Data System (ADS)
Zain, R. M.; Razali, A. M.; Salleh, K. A. M.; Yahya, R.
2017-01-01
A tomogram is a technical term for a CT image. It is also called a slice because it corresponds to what the object being scanned would look like if it were sliced open along a plane. A CT slice corresponds to a certain thickness of the object being scanned. So, while a typical digital image is composed of pixels, a CT slice image is composed of voxels (volume elements). In the case of x-ray tomography, as in x-ray radiography, the quantity being imaged is the distribution of the attenuation coefficient μ(x) within the object of interest. The difference lies only in the technique used to produce the tomogram. An x-ray radiography image can be produced directly after exposure to x-rays, while a tomography image is produced by combining radiography images from every projection angle. A number of image reconstruction methods that convert x-ray attenuation data into a tomography image have been produced by researchers. In this work, a ramp filter in "filtered back projection" has been applied. The linear data acquired at each angular orientation are convolved with a specially designed filter and then back projected across a pixel field at the same angle. This paper describes the steps for using the ImageJ software to produce an image reconstruction of x-ray tomography.
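The filtering step described above (convolving each projection with a specially designed ramp filter before back projection) is equivalent to multiplying by |f| in the Fourier domain. A minimal illustrative sketch of that step, not ImageJ code; the projection data are a made-up slab profile:

```python
import numpy as np

def ramp_filter(projection):
    """Apply the ramp (|f|) filter to one projection in the Fourier
    domain -- the 'specially designed filter' of filtered back projection."""
    n = len(projection)
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

proj = np.zeros(64)
proj[28:36] = 1.0                 # toy projection of a uniform slab
filtered = ramp_filter(proj)
```

A full reconstruction would repeat this for the projection at every angle and then smear ("back project") each filtered profile across the pixel grid at its own angle; note the ramp filter zeroes the DC component, sharpening edges at the cost of negative lobes.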
Acoustic radiosity for computation of sound fields in diffuse environments
NASA Astrophysics Data System (ADS)
Muehleisen, Ralph T.; Beamer, C. Walter
2002-05-01
The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions to the methods to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics, the ray tracing and image methods are joined by another method called luminous radiative transfer, or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.]
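The energy balance at the heart of radiosity is a linear system: each surface's radiosity is its own emission plus the diffusely reflected fraction of what arrives from every other surface. A minimal sketch with assumed form factors and reflection coefficient (toy numbers, not an acoustic model of a real room):

```python
import numpy as np

# Toy 3-surface enclosure: F[i, j] is the form factor, the fraction of
# energy leaving surface i that arrives at surface j.
F = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
rho = 0.7                           # diffuse reflection coefficient
E = np.array([1.0, 0.0, 0.0])       # only surface 0 emits (the source)

# Energy balance B = E + rho * F @ B, i.e. (I - rho F) B = E
B = np.linalg.solve(np.eye(3) - rho * F, E)
```

Because F and rho depend only on geometry and materials, the factored system can be reused as sources and receivers move, which is the computational saving the abstract mentions.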
Image Filtering with Boolean and Statistical Operators.
1983-12-01
S3(2) COMPLEX AMAT(256,4), BMAT(256,4), CMAT(256,4) CALL IOF(3, MAIN, AFLNM, DFLNI, CFLNM, MS, 82, S3) CALL OPEN(1, AFLNM, 1, IER) CALL CHECK(IER)…RDBLK(2, 6164, MAT, 16, IER) CALL CHECK(IER) DO 1 K=1,4 DO 1 J=1,256 CMAT(J,K) = AMAT(J,K) * BMAT(J,K) 1 CONTINUE 5 CALL WRBLK(3, 1641, CMAT, 16, IER
NASA Astrophysics Data System (ADS)
Herman, J. R.; Boccara, M.; Albers, S. C.
2017-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the DSCOVR satellite continuously views the sun-illuminated portion of the Earth with spectral coverage in the visible band, among others. Ideally, such a system would be able to provide a video with continuous coverage up to real time. However, due to limits in onboard storage, bandwidth, and antenna coverage on the ground, we can receive at most 20 images a day, separated by at least one hour. Also, the processing time to generate the visible image out of the separate RGB channels delays public image delivery by a day or two. Finally, occasional remote tuning of instruments can cause several-day periods where the imagery is completely missing. We are proposing a model-based method to fill these gaps and restore images lost in real-time processing. We are combining two sets of algorithms. The first, called Blueturn, interpolates successive images while projecting them on a 3-D model of the Earth, all this being done in real time using the GPU. The second, called Simulated Weather Imagery (SWIM), makes EPIC-like images utilizing a ray-tracing model of scattering and absorption of sunlight by clouds, atmospheric gases, aerosols, and land surface. Clouds are obtained from 3-D gridded analyses and forecasts using weather modeling systems such as the Local Analysis and Prediction System (LAPS) and the Flow-following finite-volume Icosahedral Model (FIM). SWIM uses EPIC images to validate its models. Typical model grid spacing is about 20 km and is roughly commensurate with the EPIC imagery. Calculating one image per hour is enough for Blueturn to generate a smooth video. The synthetic images are designed to be visually realistic and aspire to be indistinguishable from the real ones. Resulting interframe transitions become seamless, and real-time delay is reduced to 1 hour. 
With Blueturn already available as a free online app, streaming EPIC images directly from NASA's public website, and with a SWIM server to ensure a constant interval between key images, this work carries EPIC's view of the Earth forward. Enriched by two years of actual service in space, this holistic view of the Earth can be continued with a high degree of fidelity, regardless of EPIC limitations or interruptions.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. 
Solution error plots are provided for both the simulations and actual data analyses.
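Differential Evolution itself is a standard population-based optimizer. A minimal DE/rand/1/bin sketch on a toy objective is shown below; it is not the GoFFRA implementation and omits the problem-specific search constraints that remove the LS and PIT ambiguities. All parameter values are conventional defaults, chosen here as assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, Fm=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    X = lo + rng.random((pop, d)) * (hi - lo)      # initial population
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            trial = np.clip(a + Fm * (b - c), lo, hi)      # mutation
            mask = rng.random(d) < CR                      # binomial crossover
            mask[rng.integers(d)] = True                   # keep >= 1 mutant gene
            trial = np.where(mask, trial, X[i])
            ft = f(trial)
            if ft <= fX[i]:                                # greedy selection
                X[i], fX[i] = trial, ft
    return X[np.argmin(fX)], fX.min()

x_best, f_best = differential_evolution(lambda x: np.sum(x ** 2),
                                        np.array([[-5.0, 5.0]] * 2))
```

In the GoFFRA setting, the objective would be the negative log-likelihood of the constrained mixed exponential MGA model rather than this sphere function.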
NASA Astrophysics Data System (ADS)
Li, M.; Yu, T.; Chunliang, X.; Zuo, X.; Liu, Z.
2017-12-01
A new method for estimating equatorial plasma bubble (EPB) motions from airglow emission all-sky images is presented in this paper. This method, called 'cloud-derived wind technology' and widely used in satellite observations of wind, can reasonably derive zonal and meridional velocity vectors of EPB drifts by tracking a series of successive airglow 630.0 nm emission images. Airglow emission image data are available from an all-sky airglow camera at Hainan Fuke (19.5°N, 109.2°E), supported by the Chinese Meridian Project, which can receive the 630.0 nm emission from the low-latitude ionospheric F region to observe plasma bubbles. A series of pretreatment techniques, e.g. image enhancement, orientation correction, and image projection, are utilized to preprocess the raw observations. The regions of plasma bubbles extracted from the images are then divided into several small tracing windows, and each tracing window finds a target window within the search area of the following image, which is taken as the position the tracing window has moved to. From this, velocities in each window are calculated using the cloud-derived wind technique. In applying the cloud-derived wind technology, the maximum correlation coefficient (MCC) and histogram of gradients (HOG) methods for finding the target window, which seek the maximum correlation and the minimum Euclidean distance between two gradient histograms, respectively, are investigated and compared in detail. The maximum correlation method is finally adopted in this study to analyze the velocity of plasma bubbles because of its better performance than HOG. All-sky images from Hainan Fuke, between August 2014 and October 2014, are analyzed to investigate the plasma bubble drift velocities using the MCC method. 
Data at different local times on 9 nights are studied; we find that the zonal drift velocity at different latitudes and local times ranges from 50 m/s to 180 m/s, with a peak value at about 20°N. For comparison and validation, EPB motions obtained from three traditional methods are also investigated and compared with the MCC method. The advantages and disadvantages of using cloud-derived wind technology to calculate EPB drift velocities are discussed.
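The MCC tracing-window step, maximizing the correlation coefficient between a window and candidate positions in the next frame, can be sketched on synthetic data. The window size, search range, and synthetic drift below are arbitrary assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
frame1 = rng.random((40, 40))
dy, dx = 3, -2                                 # true drift between frames
frame2 = np.roll(frame1, (dy, dx), axis=(0, 1))

win = frame1[15:25, 15:25]                     # 10x10 tracing window
best, best_r = None, -1.0
for sy in range(-5, 6):                        # search area in the next frame
    for sx in range(-5, 6):
        cand = frame2[15 + sy:25 + sy, 15 + sx:25 + sx]
        r = np.corrcoef(win.ravel(), cand.ravel())[0, 1]
        if r > best_r:
            best, best_r = (sy, sx), r
# 'best' is the window displacement in pixels; mapping pixels to ground
# distance and dividing by the frame interval yields the drift velocity
```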
Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J
2015-10-01
Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-mental state examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery; to our knowledge, this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. 
Copyright © 2015 John Wiley & Sons, Ltd.
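The classifier named in the abstract, an alternating decision tree, scores a case by summing a root score with the contribution of every decision stump it satisfies, then classifying by the sign of the total. A minimal sketch of that scoring scheme follows; the feature names, thresholds, and weights are hypothetical illustrations loosely echoing the abstract's feature types, not the actual model from the study.

```python
# Sketch of alternating decision tree (ADT) scoring: the prediction is the
# sign of the root score plus each stump's contribution for the branch taken.

def adt_score(features, root_score, rules):
    """Accumulate the root score and one contribution per decision stump.

    Each rule is (predicate, score_if_true, score_if_false).
    """
    score = root_score
    for predicate, s_true, s_false in rules:
        score += s_true if predicate(features) else s_false
    return score

# Hypothetical stumps over feature types mentioned in the abstract
# (age, Mini-Mental State Examination score, white matter hyperintensity burden).
rules = [
    (lambda f: f["age"] > 70, 0.4, -0.2),
    (lambda f: f["mmse"] < 27, 0.6, -0.3),
    (lambda f: f["wmh_burden"] > 1.5, 0.5, -0.1),
]

case = {"age": 74, "mmse": 25, "wmh_burden": 2.1}
score = adt_score(case, root_score=-0.1, rules=rules)
label = "depressed" if score > 0 else "non-depressed"
```

Unlike a plain decision tree, every satisfied stump contributes, so the magnitude of the summed score doubles as a confidence measure.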
NASA Astrophysics Data System (ADS)
Carloganu, Cristina; Le Ménédeu, Eve
2016-04-01
High-energy atmospheric muons have high penetration power, which makes them well suited to geophysical studies. Provided the topography is known, measuring the muon flux transmittance leads unambiguously to a 2D density map (a so-called radiographic image) revealing spatial, and possibly also temporal, variations. Several radiographic images can be combined into a 3D tomography, though the inverse 3D problem is generally ill-posed. Muography has high potential for imaging volcanoes remotely (from kilometers away) and with high resolution (better than 100 mrad²). The experimental and methodological task is not straightforward, however, since atmospheric muons have non-trivial spectra that fall rapidly with muon energy. As shown in [Ambrosino 2015], successfully imaging km-scale volcanoes remotely requires state-of-the-art, high-resolution, large-scale muon detectors. This contribution presents the geophysical motivation for muon imaging as well as the first quantitative density radiographies of the Puy de Dôme volcano obtained by the TOMUVOL collaboration using a highly segmented muon telescope based on glass Resistive Plate Chambers. In parallel with the muographic studies, the volcano was imaged with standard geophysical methods (gravimetry, electrical resistivity) [Portal 2013], allowing in-depth comparison of the different methods. Ambrosino, F., et al. (2015), Joint measurement of the atmospheric muon flux through the Puy de Dôme volcano with plastic scintillators and Resistive Plate Chambers detectors, J. Geophys. Res. Solid Earth, 120, doi:10.1002/2015JB011969. Portal, A., et al. (2013), Inner structure of the Puy de Dôme volcano: cross-comparison of geophysical models (ERT, gravimetry, muon imaging), Geosci. Instrum. Method. Data Syst., 2, 47-54.
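The inversion step the abstract describes, from flux transmittance along a line of sight to a mean density, can be sketched with a toy exponential attenuation model. A real analysis integrates the measured muon energy spectrum rather than using a single effective attenuation length, and every number below (the attenuation length, the transmittance, the path length) is illustrative only.

```python
# Toy muography inversion for one line of sight: transmittance -> opacity
# (areal density, g/cm^2) -> mean density, dividing by the rock path length
# known from the topography. The exponential model and the effective
# attenuation length are simplifying assumptions, not the TOMUVOL method.
import math

def mean_density(transmittance, path_length_m, attenuation_length_gcm2=2.5e4):
    """Mean density (g/cm^3) along the line of sight.

    Opacity = -ln(T) * Lambda, with Lambda an assumed effective
    attenuation length; path length is converted from meters to cm.
    """
    opacity = -math.log(transmittance) * attenuation_length_gcm2  # g/cm^2
    return opacity / (path_length_m * 100.0)

# Illustrative values: 5% of the open-sky flux survives 500 m of rock.
rho = mean_density(transmittance=0.05, path_length_m=500.0)
```

Repeating this over every telescope pixel, each with its own topography-derived path length, yields the 2D density radiography; combining viewpoints gives the (ill-posed) 3D tomographic problem mentioned above.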
Utilizing HDTV as Data for Space Flight
NASA Technical Reports Server (NTRS)
Grubbs, Rodney; Lindblom, Walt
2006-01-01
In the aftermath of the Space Shuttle Columbia accident of February 1, 2003, the Columbia Accident Investigation Board recognized the need for better video data from launch, on-orbit, and landing to assess the status and safety of the shuttle orbiter fleet. The board called on NASA to improve its imagery assets and update the Agency's methods for analyzing video. This paper will feature details of several projects implemented prior to the return to flight of the Space Shuttle, including an airborne HDTV imaging system called the WB-57 Ascent Video Experiment, use of true 60 Hz progressive-scan HDTV for ground and airborne HDTV camera systems, and the decision to utilize a wavelet compression system for recording. This paper will include results of compression testing, imagery from the launch of STS-114, and details of how commercial components were utilized to image the shuttle launch from an aircraft flying at 400 knots at 60,000 feet altitude. The paper will conclude with a review of future plans to expand on the upgrades made prior to return to flight.